I have searched many forums and found cases of other people reporting slower rendering in RS than in Management Studio / Query Analyzer. However, none of the suggested solutions seems to make a difference for me.
I'm a VS/VB developer and have built multiple reports -- all using stored procedures on the back end -- that all take many times longer to run through RS than when executed via Management Studio (SSMS). My simplest proc takes a couple of parameters (no defaults included) and does a simple select against one table with a few joins. Nothing complicated. It runs in 8 secs for 6,867 rows via SSMS. Through RS (running locally through Visual Studio 2005 at this point) it takes around 25-28 secs. Yet, when I run the same query from the report's DATA tab (not the PREVIEW tab), it takes the expected 8 secs!
All reports are behaving this way.
I am not using cursors.
I have no default values on parameters.
I have added the "WITH RECOMPILE" to the proc statement.
I have "SET NOCOUNT ON" as the first line of the proc.
I hate to say this, but I even connected the proc to Crystal Reports to see how it behaved. It ran in the expected 8 secs.
I've seen someone mention that this may be a known issue with RS -- that it executes the proc twice. Is there any truth to this?
Also, a couple of posts include traces demonstrating that the report generates significantly more data reads via RS than through SSMS.
We're a shop that is considering a switch from Crystal to RS, but we do everything through stored procedures. I need to clear up this issue before I can go forward recommending a switch. I'm including a copy of a typical proc below for review... What am I missing? What's the deal here with RS?
IF OBJECT_ID('dbo.rpt_InactiveAccounts') IS NOT NULL
DROP PROCEDURE dbo.rpt_InactiveAccounts
GO
CREATE PROCEDURE dbo.rpt_InactiveAccounts
(@pRunDate datetime
,@pSalesperson varchar(5000)
,@pIncludeOpen char(1)
)
WITH RECOMPILE
AS
SET NOCOUNT ON
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
-- CREATE/SETUP TEMP TABLE FOR USE IN PARSING MULTI-VALUED STRING INPUTS
DECLARE @NumberPivot TABLE (NumberID INT PRIMARY KEY)
DECLARE @intLoopCounter INT
SELECT @intLoopCounter = 0
WHILE @intLoopCounter <= 4999 BEGIN
INSERT INTO @NumberPivot
VALUES (@intLoopCounter)
SELECT @intLoopCounter = @intLoopCounter + 1
END
-- CREATE TEMP TABLES TO HOLD PARSED VALUES FROM MULTI-VALUE STRING INPUT PARAMETERS
DECLARE @SalespersonTable TABLE
(tmpSalesperson varchar(30))
-- PARSE OUT @pSALESPERSON PARAMETER AND STORE VALUES IN TEMP TABLE
INSERT INTO @SalespersonTable
SELECT SUBSTRING(',' + @pSalesperson + ',', NumberID + 1,
CHARINDEX(',', ',' + @pSalesperson + ',', NumberID + 1) - NumberID - 1)
FROM @NumberPivot
WHERE NumberID <= LEN(',' + @pSalesperson + ',') - 1
AND SUBSTRING(',' + @pSalesperson + ',', NumberID, 1) = ','
SELECT DISTINCT
CASE
WHEN s.Name IS NULL THEN '<< OPEN >>'
ELSE s.Name
END As SalespersonName
,c.ClassId
,c.CustId
,c.Name
,c.Addr1
,c.Addr2
,c.Addr3
,c.City
,c.State
,CASE
WHEN Len(c.Zip) = 9 And CharIndex(' ', c.Zip, 0) = 0 THEN Left(c.Zip, 5) + '-' + Right(c.Zip, 4)
ELSE c.Zip
END As Zip
,ac1.descr As Terms
,ac2.descr As Status
FROM
ACTCustomer c (NOLOCK)
LEFT OUTER JOIN CustSales cs ON c.CustId = cs.CustId
LEFT OUTER JOIN ACTSalesperson s ON cs.SlsId = s.SalesId
INNER JOIN @SalespersonTable st ON s.Name = st.tmpSalesperson OR (s.Name IS NULL AND @pIncludeOpen = 'Y')
INNER JOIN ACTCode ac1 ON ac1.Code = c.Terms And ac1.FieldId = 'CustTerms'
INNER JOIN ACTCode ac2 ON ac2.Code = c.Status And ac2.FieldId = 'Status'
WHERE
c.LastInvcDate <= @.pRunDate
And c.ClassId <> 'TR'
ORDER BY
SalespersonName, Name, CustId
Hi Dave,
I don't think this will solve your performance issue, but I do have a recommendation: since you are passing a comma-separated list to your reports as a parameter, you could create a user-defined function that converts that list into a table. This would allow you to reuse the same function in all of your reports.
Here is an example of a function that converts a comma-separated list into a table:
http://blogs.vandamme.com/development/2007/06/parse_comma_sep.html
Then you could modify the SQL statement in your stored procedure to use this function; something like this should do:
SELECT <Fields>
FROM <Table> a
JOIN <Function> (@CommaSeparatedList) b ON b.uid = a.id
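For illustration, here is a minimal sketch of such a split function as a multi-statement table-valued function (the name dbo.fn_SplitList and the Item column are made up here, not taken from the linked post):

CREATE FUNCTION dbo.fn_SplitList
(@List varchar(5000)   -- the comma-separated input, e.g. the report parameter
,@Delim char(1)        -- the delimiter, ',' in your case
)
RETURNS @Items TABLE (Item varchar(30))
AS
BEGIN
    DECLARE @Pos int, @Next int
    SELECT @Pos = 1
    WHILE @Pos <= LEN(@List) + 1
    BEGIN
        -- find the next delimiter (a trailing one is appended so the last item is caught)
        SELECT @Next = CHARINDEX(@Delim, @List + @Delim, @Pos)
        INSERT INTO @Items (Item)
            VALUES (SUBSTRING(@List, @Pos, @Next - @Pos))
        SELECT @Pos = @Next + 1
    END
    RETURN
END

Then the salesperson join in your proc could become something like JOIN dbo.fn_SplitList(@pSalesperson, ',') st ON s.Name = st.Item, replacing the per-proc number-pivot parsing.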
For the performance issue, did you try to profile and compare your SSMS execution and the Report execution?
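On the SSMS side, something like the following (the parameter values here are made up for illustration) will report logical reads and CPU/elapsed time, which you can then compare against a Profiler trace of the RS execution (capture the RPC:Completed and SQL:BatchCompleted events):

-- Compare these numbers with the Reads/Duration columns in the Profiler trace of the RS run
SET STATISTICS IO ON
SET STATISTICS TIME ON

EXEC dbo.rpt_InactiveAccounts
     @pRunDate = '2007-01-01'       -- illustrative values only
    ,@pSalesperson = 'SMITH,JONES'
    ,@pIncludeOpen = 'Y'

SET STATISTICS IO OFF
SET STATISTICS TIME OFF

If RS really is issuing more reads for the same proc, the trace should show it directly.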
HTH,