SSRS truncating parameter - sql-server

I have an SSRS report that populates a parameter with a stored procedure, and that query works as expected. When the report runs, though, the parameter is being truncated: I choose the value ABCD, but the report returns values for ABC. The stored procedure I am passing the parameter to runs perfectly in SSMS and returns ABCD data. When I test the query in the query designer or run the report, I get ABC data. How do I get SSRS to pass in the entire parameter?

Parameters are strings without a set length, so there is nothing that should truncate your values. Have you checked the values to make sure that the Value and Label are the same?
The Label is what you see (ABCD), while the Value is what is actually passed to the parameter. I don't know if that would be your problem if it works on your local machine, though.
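Just as a hedged sketch of what I mean (the table and column names here are made up for illustration), the parameter's available-values query might look like this; the column mapped to Value is what actually gets passed to your main stored procedure:
select DepartmentCode as ParameterValue,  -- mapped to "Value": must hold the full string, e.g. 'ABCD'
       DepartmentName as ParameterLabel   -- mapped to "Label": only what the user sees in the dropdown
from dbo.Departments
order by DepartmentName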
If that doesn't work, you can try deleting the parameter and recreating it - it shouldn't make a difference, but it has fixed this sort of thing before.

I had something like this happen to me before. Try going into your stored proc and creating an output table with a fixed definition for your parameter's results to go into, i.e.
create table #mytable (returnval varchar(50))
Make sure the table has one column for the results of the query you will return, and make the data type a varchar with enough space to hold any possible return value. Note: do not use nvarchar, as I find this still sometimes truncates the values and cuts things off.
Execute the query the same way you originally did in the proc, but this time insert the values into your temp table, i.e.
insert into #mytable
select your_return_column from table.name
This will insert all the values into your temp table, which now has a set length for every return value that will not change. You could try to do the same thing with the original query by using a set-length field for the table, but the problem is that there are many factors that can change this. Here are a few examples:
If you have two SELECT statements combined with a UNION to get results from both, each field may have a length and data type that differs from the other. SQL Server will pick the best data type for the combined result, but when it sends the data over to SSRS it can be interpreted as a different length unless it is set explicitly.
You may have multiple data types in the return field (integer values, varchar, text, etc.) and SSRS gets confused about which one to use, which reinforces the point above.
Another possibility is that it is happening on the SSRS side of things and not in the proc, but by using this method you eliminate the stored proc as one possible cause.
Also, check the configuration settings of your report on the back end and make sure that the return value is set from the query as it should be, and that no return data type is explicitly specified. This sometimes happens if, at report design time, you create the report first with a static parameter for testing and only later change the parameter to a list from a query result set - which I have had happen as well.
Finally, one good practice each time you make a change is to ensure that the report is actually being regenerated and not showing a cached version, which makes it look like nothing changes each time. To do this, close the report between runs, or hit reload on the page after you run the report to force a refresh so you can see the differences.
I think if you do all of this you will either find the issue or eliminate it, or both, as I have so many times.

Related

SSRS Running Datasets Multiple Times When I Change Parameter Values

I'm seeing some weird behavior I can't quite figure out. I have a report with a bunch of parameters, none of them cascading. A handful of the parameters are set to allow multi-select, and their available/default values are retrieved from data sets that use stored procedures.
When I load a report every data set executes, which I expect. However, if I change a parameter value, such as the date for the date parameter, every data set executes again. Why is this?
What's compounding the situation is that I have 2 copies of each parameter (Param1, Param1_Internal, Param2, Param2_Internal, etc.). I have it set up like that for formulas I'm using in the report to determine whether the user selected 'Select All' for the multi-select parameters. So, for example, I'll have Param1 and Param1_Internal set from data set 1, and Param2 and Param2_Internal set from data set 2. When I change an unrelated parameter, data set 1 will execute twice, and then data set 2 executes twice.
Any suggestions to:
Get these data sets to run once each, even though one data set feeds available values for 2 parameters?
Stop the data sets from running every time I change an unrelated parameter?
I am currently on SQL Server 2016.
Thanks
Edit
So, I found the answer to part of my question. In some cases I am using an expression as the parameter value in the data set that calls my stored procedure. It looks like when you do this, SSRS will execute that data set every time you change any parameter value. I'm still hoping someone will have advice on how to have a data set run once in a situation where I have 2 parameters using it for default/available values.
You can cache data set results if they are in a shared dataset stored on the SSRS server. This way the first time you run it will execute the query, then the second will just pull from the cache:
https://learn.microsoft.com/en-us/sql/reporting-services/work-with-shared-datasets-web-portal#caching
Use with caution though, as if the backing values of your parameters change often you could get inconsistent results with your reporting.
You can stop the refresh on parameter change by setting the refresh option in the parameter's properties (on the Advanced tab, choose 'Never refresh').

SQL - Compare 2 text fields

I’m using a software known as FME Desktop. In this software we can issue SQL commands through an item called a transformer. I’m using a transformer called a SQLExecutor that uses a very simple query to make a comparison. Below is an explanation of what I’m trying to do with this SQL Query and the fact that it does not work when trying to compare 2 text fields.
I believe my issue is a limitation of SQL when used in the SQLExecutor. Let's say I have a layer of data called TEST.LEASE and I want to compare it to a layer called EDIT.LEASE based on one unique ID field. Both of these layers are in the same database. We use SQL Server for our stored data. There is a TEXT field in both layers called GIS_ID. This is a unique ID field. So what happens is we get updates on our LEASE layer and they start off being loaded to TEST.LEASE. When we have done our QA/QC of the data and we are satisfied that they are ready to be uploaded to EDIT.LEASE we then run an FME job that serves as our promotion tool. What this promotion tool does is that it checks various fields in TEST.LEASE to make sure they qualify for being uploaded (this part works 100% without issue).
Right before they are promoted to EDIT.LEASE we need to know if this will be a completely new record, in which case we will do an INSERT with FME. If by chance the GIS_ID already exists then we need to do an UPDATE to those records. The tool we have works perfectly for determining if it is an INSERT or UPDATE, except for one seemingly small thing … IT ONLY WORKS IF THE TEXT FIELD CONTAINS A NUMBER THAT DOESN’T HAVE A LETTER IN IT.
FYI: Someone at our company decided to make the GIS_ID field a text field. In my opinion it should have been an integer field because comparisons would have been super easy. But I can't change that now, it has already been decided by people who make way more money than I do that it will be a text field.
As mentioned … The GIS_ID is a text field (in both layers and they are both the same size, there is no difference in the field in both layers). As you may know, SQL doesn't care if it is a TEXT field or an INTEGER field when all that is contained in that field is a number. It can still compare 202 to 202 to see if they are equal to each other. For my example let's say I have a record in both TEST.LEASE and EDIT.LEASE where both of their GIS_ID fields equal 09198760. When I run the query below it runs perfectly.
select OBJECTID
from TEST.LEASE_UPDATE_INSERT_WRITER
where GIS_ID = #Value(GIS_ID)
It runs perfectly, as I’ve mentioned, on the data if both GIS_ID text fields have only numbers in them. But if just one record contains an actual alpha, the SQL query will error out.
So if GIS_ID has 09198760a01 once the query reaches the “a” in GIS_ID a SQL error is returned. I’m not looking for a way for the job to continue and ignore those records, because I need ALL OF THE RECORDS to load. I need to know if anyone would know how to add to or rewrite the query above so that it loads both “number only text fields” and “numbers containing a letter fields.”
I hope that long explanation is clear. Please let me know if it isn’t. Thanks for any help you might be able to provide for me
Sincerely,
Tex
I am assuming that #Value is the function that is causing you problems. I briefly checked their docs; it looks like you need to encapsulate it in single quotes, like so: '#Value(GIS_ID)'
http://fmepedia.safe.com/articles/How_To/Executing-a-Stored-Procedure-on-Microsoft-SQL-Server-with-FME
Jeff is right. As a generic answer for regular SQL users, and even people using SQL in their application code: if you are comparing text like the OP mentioned, then you need to use single quotes:
Where avalue = 'myvalue'
Otherwise SQL Server thinks it is an int, which is why it works when the value he's passing in is only numbers. It's not always easy to tell what the problem is when you're passing in parameters like this:
Where avalue = #myvalue
So you'll need to pay attention to that. I just wanted to mention this in case it helps someone else with a similar issue. I figured it out when we were getting errors from a field that had been concatenated from an ID field, i.e. it worked when the value was 2, but not 2,3, etc. Wrapping the parameter in single quotes easily fixed that, as we were truly only concerned with value = '2' in our case.
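Applied to the query from the question, the fix would look something like this (just a sketch - I'm assuming #Value(GIS_ID) still expands to the raw attribute text inside the SQLExecutor):
select OBJECTID
from TEST.LEASE_UPDATE_INSERT_WRITER
where GIS_ID = '#Value(GIS_ID)'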
Hope this makes sense.

Crystal Reports using multiple results from a Stored Procedure

I have a stored proc in SQL Server, and one of the values it returns is a string containing the query parameters. I display those query parameters at the top of the report. That works great if something is found; not so great if nothing was found.
We have tried returning two query results, one the data set that I will make the report from (which includes the query parameters), the other the query parameter string. Crystal appears to only see the first data set, and this very old discussion (http://www.sqlteam.com/forums/topic.asp?TOPIC_ID=42462) says that is not something that will work. But that was over 5 years ago, and I'm hoping things have changed.
The problem is, if nothing is returned, the report is so blank that the person doesn't even know what query parameters they used. If they could see that they queried something that returned no results, that would be useful.
So, if I have at the end of my stored proc:
SELECT * FROM [#ResultSet]
select @SearchCriteria as SearchCriteria
I'd like to be able to display the SearchCriteria even if there is nothing in the #ResultSet. Can it be done with this version of Crystal? Is there another way to do this?
As stated by the first answer, if the results of one query have the same number of columns as the other (and the types match), you can UNION the results, or UNION ALL them (if you want duplicates), to get ONE resultant set.
If the types or columns are not the same, then you cannot do this. The only other option is to merge all the relevant data into a temp table and then return the results from that temp table (SELECT * FROM #temp).
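As a rough sketch of that idea (the OrderID/Amount columns are made up, since I don't know the shape of your #ResultSet), you could fold the criteria string into the one result set so Crystal always gets at least one row carrying the criteria, even when no data rows were found:
SELECT r.OrderID, r.Amount, @SearchCriteria AS SearchCriteria
FROM [#ResultSet] AS r
UNION ALL
-- emit a single criteria-only row when the result set is empty
SELECT NULL, NULL, @SearchCriteria
WHERE NOT EXISTS (SELECT 1 FROM [#ResultSet])
This would replace the two separate SELECTs at the end of the proc with a single result set that Crystal can see.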
How are you currently able to display the parameters when results are found?
You haven't mentioned how you are using the Crystal Report in your environment.
Typically, I've done criteria display by passing the parameters to the Crystal Report as Report Parameters, and then using them in fields. This assumes you are calling it from a client application in some way.
Another option is to load the results into client DataTables and bind to them as a data source; it's certainly possible to handle the multiple result sets that way.

Traversing multiple CSV in SQL

I have a SQL Server 2008 database. This database has a stored procedure that will update several records. The ids of these records are stored in a parameter that is passed in via a comma-delimited string. The property values associated with each of these ids are passed in via two other comma-delimited strings. It is assumed that the length (in terms of tokens) and the orders of the values are correct. For instance, the three strings may look like this:
Param1='1,2,3,4,5'
Param2='Bill,Jill,Phil,Will,Jack'
Param3='Smith,Li,Wong,Jos,Dee'
My challenge is, I'm not sure what the best way is to actually go about parsing these three CSVs and updating the corresponding records. I have access to a procedure named ConvertCSVtoTable, which converts a CSV to a temp table of records. So Param1 would return
1
2
3
4
5
after the procedure was called. I thought about a cursor, but that seems to get really messy.
Can someone tell me/show me, what the best way to address this problem is?
I'd give some thought to reworking the inputs to your procedure. Since you're running SQL 2008, my first choice would be to use a table-valued parameter. My second choice would be to pass the parameters as XML. Your current method, as you already know, is a real headache and is more error prone.
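A minimal sketch of the table-valued parameter approach (dbo.People, dbo.PersonUpdate and the column names are all assumptions for illustration):
-- One-time setup: a table type that carries one row per record to update
CREATE TYPE dbo.PersonUpdate AS TABLE
(
    PersonId  int         NOT NULL PRIMARY KEY,
    FirstName varchar(50) NOT NULL,
    LastName  varchar(50) NOT NULL
);
GO

CREATE PROCEDURE dbo.UpdatePeople
    @Updates dbo.PersonUpdate READONLY  -- the caller passes a table instead of three CSVs
AS
BEGIN
    SET NOCOUNT ON;

    -- join on the id and update both name columns in one pass
    UPDATE p
    SET    p.FirstName = u.FirstName,
           p.LastName  = u.LastName
    FROM   dbo.People AS p
    JOIN   @Updates   AS u ON u.PersonId = p.Id;
END;
From client code you can fill a DataTable and pass it as @Updates, so the ids and names travel together as rows and there is no positional parsing to get wrong.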
You can also bulk load the values into a temp table, then PIVOT them and insert into the proper one.

Can't change data type on MS Access 2007

I have a huge database (800MB) that contains a field called 'Date Last Modified'. At the moment this field is stored as a Text data type, but I need to change it to a Date/Time field to carry out some queries.
I have another database with exactly the same structure but only 35MB of data inside it, and when I change the data type there it works fine. But when I try to change the data type in the big database it gives me an error:
Microsoft Office Access can't change the data type.
There isn't enough disk space or memory
After doing some research, some sites mentioned changing a registry setting (MaxLocksPerFile). I tried that as well, but no luck :-(
Can anyone help please?
As John W. Vinson says here, the problem you're running into is that Access wants to hold a copy of the table while it makes the changes, and that causes it to exceed the maximum allowable size of an Access file. Compacting and repairing might help get the file under the size limit, but it didn't work for me.
If, like me, you have a lot of complex relationships and reports on the old table that you don't want to have to redo, try this variation on #user292452's solution instead:
1. Copy the table (i.e. 'YourTable'), then paste Structure Only back into your database with a different name (i.e. 'YourTable_new').
2. Copy YourTable again, and paste-append the data to YourTable_new. (To paste-append, first paste, and select Append Data to Existing Table.)
3. You may want to make a copy of your Access database at this point, just in case something goes wrong with the next part.
4. Delete all the data in YourTable using a delete query: select all fields using the asterisk, and then run with default settings.
5. Now you can change the fields in YourTable as needed and save again.
6. Paste-append the data from YourTable_new back to YourTable, and check that there were no errors from type conversion, length, etc.
7. Delete YourTable_new.
One relatively tedious (but straightforward) solution would be to break the big database up into smaller databases, do the conversion on the smaller databases, and then recombine them.
This has an added benefit that if, by some chance, the text is an invalid date in one chunk, it will be easier to find (because of the smaller chunk sizes).
Assuming you have some kind of integer key on the table that ranges from 1 to (say) 10000000, you can just do queries like
SELECT *
INTO newTable1
FROM yourtable
WHERE yourkey >= 0 AND yourkey < 1000000
SELECT *
INTO newTable2
FROM yourtable
WHERE yourkey >= 1000000 AND yourkey < 2000000
etc.
Make sure to enter and run these queries separately, since it seems that Access will give you a syntax error if you try to run more than one at a time.
If your keys are something else, you can do the same kind of thing, but you'll have to be a bit more tricky about your WHERE clauses.
Of course, a final thing to consider, if you can swing it, is to migrate to a different database that has a little more power. I'm guessing you have reasons that this isn't easy, but with the amount of data you're talking about, you'll probably be running into other problems as well as you continue to use Access.
EDIT
Since you are still having some troubles, here is some more detail in the hopes that you'll see something that I didn't describe well enough before:
Here, you can see that I've created a table "OutputIDrive" similar to what you're describing. I have an ID tag, though I only have three entries.
Here, I've created a query, gone into SQL mode, and entered the appropriate SQL statement. In my case, because my query only grabs value >= 0 and < 2, we'll just get one row...the one with ID = 1.
When I click the run button, I get a popup that tells/warns me what's going to happen...it's going to put a row into a new table. That's good...that's what we're looking for. I click "OK".
Now our new table has been created, and when I click on it, we can see that our one line of data with ID = 1 has been copied over to this new table.
Now you should be able to just modify the table name and the number values in your SQL query, and run it again.
Hopefully this will help you with whatever tripped you up.
EDIT 2:
Aha! This is the trick. You have to enter and run the SQL statements one at a time in Access. If you try to put multiple statements in and run them, you'll get that error. So run the first one, then erase it and run the second one, etc. and you should be fine. I think that will do it! I've edited the above to make it clearer.
Adapted from Karl Donaubauer's answer on an MSDN post:
Switch to the Immediate window (Ctrl + G)
Execute the following statement:
DBEngine.SetOption dbMaxLocksPerFile, 200000
Microsoft has a KnowledgeBase article that addresses this problem directly and describes the cause:
The page locks required for the transaction exceed the MaxLocksPerFile value, which defaults to 9500 locks. The MaxLocksPerFile setting is stored in the Windows registry.
The KnowledgeBase article says it applies to Access 2002 and 2003, but it worked for me when changing a field in an .mdb from Access 2013.
It's entirely possible that in a database of that size, you've got text data that won't convert to a valid Date/Time.
I would suggest (and you may hate me for this) that you export all those prospective date values from "Big" and go through them (perhaps in Excel) to see which ones are not formatted the way you'd expect.
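A quick way to find the offending values first (assuming the table is called YourTable and the text column really is named 'Date Last Modified' - adjust both names to suit) is an Access query like:
SELECT [Date Last Modified]
FROM YourTable
WHERE Not IsDate([Date Last Modified]);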
Assuming that the error message is accurate, you're running up against a disk or memory limitation. Assuming that you have more than a couple of gigabytes free on your disk drive, my best guess is that rebuilding the table would put the database (including work space) over the 2 gigabyte per file limit in Access.
If that's the case you'll need to either:
1. Unload the data into some convenient format and load it back into an empty database with an already-existing table definition; or
2. Move a subset of the data into a smaller table, change the data type in the smaller table, compact and repair the database, and repeat until all the data is converted.
If the error message is NOT correct (which is possible), the most likely cause is a bad or out-of-range date in your text-date column.
Copy the table (i.e. 'YourTable') then paste just its structure back into your database with a different name (i.e. 'YourTable_new').
Change the fields in the new table to what you want and save it.
Create an append query and copy all the data from your old table into the new one.
Hopefully Access will automatically convert the old text field directly to the correct value for the new Date/Time field. If not, you might have to clear out the old table and re-append all the data and use a string to date function to convert that one field when you do the append.
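The append query with an explicit conversion might look something like this (a sketch - the ID field name is assumed, and the IsDate check skips any values that won't convert, which you would then need to fix up by hand):
INSERT INTO YourTable_new ( ID, [Date Last Modified] )
SELECT ID, CDate([Date Last Modified])
FROM YourTable
WHERE IsDate([Date Last Modified]);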
Also, if there is an autonumber field in the old table this might not work because there is no way to ensure that the old autonumber values will line up with the new autonumber values that get assigned.
You've been offered a bunch of different ways to get around the disk space error message.
Have you tried adding a new field to your existing table using the Date data type and then updating that field with the value of the existing string date field? If that works, you can then delete the old field and rename the new one to the old name. That would probably take up less temp space than doing a direct conversion from string to date on a single field.
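In Access SQL that update might look something like this (a sketch - YourTable and NewDateModified are hypothetical names for the table and the added Date/Time field, and the IsDate check skips values that won't convert):
UPDATE YourTable
SET NewDateModified = CDate([Date Last Modified])
WHERE IsDate([Date Last Modified]);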
If it still doesn't work, you may be able to do it with a second table with two columns: the first a long integer (make it the primary key), the second a date. Then append the PK and string date field to this empty table. Then add a new date field to the existing table and, using a join, update the new field with the values from the two-column table.
This may run into the same problem. It depends on number of things internal to the Jet/ACE database engine over which we have no real control.
