SQL string not fetching data (SQL Server)

I'm using SQL Server Compact 4 and I've run into a little snag. I have a table named Counter where I store some information (rows 0-2). The code checks today's date in that table, grabs each CounterValue into a variable of its own (0, 1, 2), and presents itself as a counter. The next day the new data will have a new date, and the same variables will present the new values.
So far so good. If I run the SQL query as follows:
SELECT * FROM Counter WHERE Dateis = '21-05-2014'
It presents the correct values, but when I try the SELECT from Razor nothing is executed, and it's driving me bonkers.
var DateNow = DateTime.Now.ToString("dd-MM-yyyy");
var CaseCounter = db.Query("SELECT * FROM Counter WHERE Dateis ='DateNow'");
DateNow has the value 21-05-2014, the same as in the SQL query I tested, but it does nothing; no data is presented. If I remove the WHERE clause, data is displayed, but then it shows data for all dates, not just the current one.
Am I missing something here?
Thanks in advance.

You need to actually patch in the value of the variable, rather than the name of the variable:
var DateNow = DateTime.Now.ToString("dd-MM-yyyy");
var CaseCounter = db.Query("SELECT * FROM Counter WHERE Dateis = '" + DateNow + "'");
Your original code is literally submitting this command to the DB:
SELECT * FROM Counter WHERE Dateis = 'DateNow'
So instead you need to close the string with a double quote, then a concatenation operator (the +), then your variable, before concatenating the closing quote.
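Incidentally, a parameterized query is safer still. Assuming db here is the WebMatrix Database helper (which the Razor syntax suggests), it accepts positional placeholders, which avoids both the quoting problem and SQL injection:
var DateNow = DateTime.Now.ToString("dd-MM-yyyy");
// @0 is bound to the first argument after the SQL string; no manual quoting needed
var CaseCounter = db.Query("SELECT * FROM Counter WHERE Dateis = @0", DateNow);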

Related

How can I add a datetime stamp to a zip file when unloading data from Snowflake to S3?

I want to be able to add a timestamp to the filename I'm writing to S3. So far I've been able to write files to AWS S3 using the example below. Can someone guide me on how to put a datetime stamp in the file name?
copy into @s3bucket/something.csv.gz
from (select * from mytable)
file_format = (type=csv FIELD_OPTIONALLY_ENCLOSED_BY = '"' compression='gzip' )
single=true
header=TRUE;
Thanks in advance.
The stage/location portion of the COPY INTO statement does not allow functions to define the path dynamically in SQL.
However, you can use a stored procedure to build the query dynamically, using the JavaScript Date APIs and some string formatting.
Here's a very trivial example for your use-case, with some code adapted from another question:
CREATE OR REPLACE PROCEDURE COPY_INTO_PROCEDURE_EXAMPLE()
RETURNS VARIANT
LANGUAGE JAVASCRIPT
EXECUTE AS CALLER
AS
$$
var rows = [];
var n = new Date();
// May need refinement to zero-pad some values or achieve a specific format
var datetime = `${n.getFullYear()}-${n.getMonth() + 1}-${n.getDate()}-${n.getHours()}-${n.getMinutes()}-${n.getSeconds()}`;
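// A hedged refinement (not in the original answer): zero-pad the parts with an
// ES5-safe helper, in case String.prototype.padStart isn't available in this engine:
// var pad = function (v) { return ('0' + v).slice(-2); };
// var datetime = n.getFullYear() + '-' + pad(n.getMonth() + 1) + '-' + pad(n.getDate())
//     + '-' + pad(n.getHours()) + '-' + pad(n.getMinutes()) + '-' + pad(n.getSeconds());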
var st = snowflake.createStatement({
  sqlText: `COPY INTO '@s3bucket/${datetime}_something.csv.gz' FROM (SELECT * FROM mytable) FILE_FORMAT=(TYPE=CSV FIELD_OPTIONALLY_ENCLOSED_BY='"' COMPRESSION='gzip') SINGLE=TRUE HEADER=TRUE;`
});
var result = st.execute();
result.next();
rows.push(result.getColumnValue(1));
return rows;
$$;
To execute, run:
CALL COPY_INTO_PROCEDURE_EXAMPLE();
The above omits polished date formatting (zero-padding months, days, hours, minutes, and seconds; the commented-out helper above sketches one approach), error handling (if the COPY INTO fails), parameterisation of the input query, and so on, but it should give a general idea of how to achieve this.
As Sharvan Kumar suggests in another answer, Snowflake now supports this:
-- Partition the unloaded data by date and hour. Set 32000000 (32 MB) as the upper size limit of each file to be generated in parallel per thread.
copy into @%t1
from t1
partition by ('date=' || to_varchar(dt, 'YYYY-MM-DD') || '/hour=' || to_varchar(date_part(hour, ts))) -- concatenate labels and column values to output meaningful filenames
file_format = (type=parquet)
max_file_size = 32000000
header=true;

list @%t1;
This feature is not supported yet in Snowflake; however, it will be coming soon.

How do I deal with a passed parameter that may have an apostrophe in SQL Server?

I'm using a query within a loop in TestComplete, and each time the loop completes, the variable is updated with a new value. I want to account for the possibility of an apostrophe in the variable so I can use one query. For example:
i = 0;
while (i < companyCount)
{
    result = CompanyAddress(company);
    Log.Message(result);
    i++;
}
'CompanyAddress' is the query I have stored in another script file, and 'company' is the variable being passed.
SELECT address FROM table WHERE name = '" + company + "';
I tried REPLACE(), but that didn't fix the problem when it got to the second iteration.
To escape an apostrophe inside a SQL string, you double it:
SET @name = 'Dan''s example'
But the best way is to parameterize the query, which also helps prevent SQL injection.
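The original answer stops at the advice; purely as an illustration (in C# with ADO.NET rather than TestComplete's scripting, and with hypothetical names: Companies, connectionString, and company are not from the question), a parameterized version lets apostrophes pass through untouched:
using (var con = new SqlConnection(connectionString))
using (var cmd = new SqlCommand("SELECT address FROM Companies WHERE name = @name", con))
{
    // The driver handles quoting, so apostrophes in company need no escaping
    cmd.Parameters.AddWithValue("@name", company);
    con.Open();
    var address = cmd.ExecuteScalar() as string;
}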
I incorrectly used the REPLACE() method I mentioned in my initial explanation.
I sent the variable with the single quote to the query:
var test = "agwh'waf'2";
but when I included it in the query, I used the following:
"select * from table where name = " + test.replace(/'/g, "\'\'");
This replaced single quotes in my variable with two single quotes before the query was run.

Is there something faster than the Enumerable.Except<TSource> method?

I have a program that downloads data from a server database to a client database. The server database has been growing recently.
In the program there is an option to download all data, or only data for a specific time period (the user can select how many days back from today). If the user selects all, the program truncates the client database table and inserts all data using bulk copy. That part is OK.
The problem is when the user selects a specific time period (each record has a created date-time): the program has to compare the two tables and divide the server's records into two sets, one of records that don't yet exist on the client and one of records that already do. Records that don't exist are inserted directly into the client DB (using bulk insert); existing records are bulk-copied into a temporary table, which is then used to update the client's table. My actual problem occurs when dividing the server's table. This is how I did it:
updateTable = (From c In dt_from_server.AsEnumerable()
               Join o In Dt_from_client.AsEnumerable()
               On c.Field(Of String)("BARCODE").Trim() Equals o.Field(Of String)("BARCODE").Trim()
               And c.Field(Of String)("ITEM_CODE").Trim() Equals o.Field(Of String)("ITEM_CODE").Trim()
               Select c).CopyToDataTable()

insertTable = dt_server.AsEnumerable()
              .Except(updateTable.AsEnumerable(), DataRowComparer.Default)
              .CopyToDataTable()
(Normally there are over 1 million records in the server table.)
With over 1 million records, the update part takes an acceptable time, around 10 minutes (yes, it takes 5 GB of RAM, but in this case that's OK considering the performance). But the insert part seems to take days, just to assign insertTable (a DataTable). That's the issue: the AsEnumerable().Except() part takes a very long time, and I couldn't find a way to speed this process up. I'm not sure I explained this correctly. Could anyone give me some advice?
Since you have commented that dt_from_server and dt_server are actually the same DataTable, you don't need to compare all values of all DataRows with each other, which is what DataRowComparer.Default does. You can use Except without the second comparer parameter; then only references are compared, which is much faster.
You also don't need two CopyToDataTable calls, which create two additional big DataTables in memory; process the rows one after the other instead.
Here is a different approach using LINQ's left outer join, which is more efficient:
Dim query = From rServ In dt_from_server.AsEnumerable()
            Group Join rClient In Dt_from_client.AsEnumerable()
            On New With {
                Key .BarCode = rServ.Field(Of String)("BARCODE").Trim(),
                Key .ItemCode = rServ.Field(Of String)("ITEM_CODE").Trim()
            } Equals New With {
                Key .BarCode = rClient.Field(Of String)("BARCODE").Trim(),
                Key .ItemCode = rClient.Field(Of String)("ITEM_CODE").Trim()
            } Into Group
            From client In Group.DefaultIfEmpty()
            Select New With {.ServerRow = rServ, .InsertRow = client Is Nothing}

Dim insertOrUpdateRows = query.ToLookup(Function(x) x.InsertRow, Function(x) x.ServerRow)
Dim insertRows = insertOrUpdateRows(True).CopyToDataTable()   ' CopyToDataTable is redundant if you process the rows immediately
Dim updateRows = insertOrUpdateRows(False).CopyToDataTable()  ' CopyToDataTable is redundant if you process the rows immediately
But in general the most scalable and efficient approach would be not to load everything into memory at once, but to use database paging (or a stored procedure) to process only part of the data in memory at a time; otherwise it's likely that you will encounter an OutOfMemoryException sooner or later. A paging sketch follows the C# version below.
C# as requested:
var query = from rServ in dt_from_server.AsEnumerable()
            join rClient in Dt_from_client.AsEnumerable()
                on new { BarCode = rServ.Field<string>("BARCODE").Trim(), ItemCode = rServ.Field<string>("ITEM_CODE").Trim() }
                equals new { BarCode = rClient.Field<string>("BARCODE").Trim(), ItemCode = rClient.Field<string>("ITEM_CODE").Trim() }
                into clientGroup
            from client in clientGroup.DefaultIfEmpty()
            select new { ServerRow = rServ, InsertRow = client == null };

var insertOrUpdateRows = query.ToLookup(x => x.InsertRow, x => x.ServerRow);
var insertRows = insertOrUpdateRows[true].CopyToDataTable();  // CopyToDataTable is redundant if you process the rows immediately
var updateRows = insertOrUpdateRows[false].CopyToDataTable(); // CopyToDataTable is redundant if you process the rows immediately
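To illustrate the paging idea mentioned above, here is a minimal sketch, assuming SQL Server 2012+ (for OFFSET/FETCH); ServerTable and connectionString are placeholders rather than names from the question, while the ORDER BY reuses the question's key columns. It processes the server rows in fixed-size pages instead of materializing everything at once:
const int pageSize = 50000;
for (int offset = 0; ; offset += pageSize)
{
    var page = new DataTable();
    using (var con = new SqlConnection(connectionString))
    using (var cmd = new SqlCommand(
        @"SELECT * FROM ServerTable
          ORDER BY BARCODE, ITEM_CODE
          OFFSET @offset ROWS FETCH NEXT @pageSize ROWS ONLY", con))
    {
        cmd.Parameters.AddWithValue("@offset", offset);
        cmd.Parameters.AddWithValue("@pageSize", pageSize);
        con.Open();
        // A deterministic ORDER BY keeps the paging stable between reads
        new SqlDataAdapter(cmd).Fill(page);
    }
    if (page.Rows.Count == 0) break;
    // run the insert/update split from above on this page only
}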

SQL Server: Get Latest Auto-Increment Value [duplicate]

I am creating a WinForms application in C# with a SQL Server database.
I have one table, employee_master, which has columns like Id, name, address, and phone no. Id is an auto-increment (identity) column and all the other columns are varchar.
I am using this code to get the next auto increment value:
string s = "select max(id) as Id from Employee_Master";
SqlCommand cmd = new SqlCommand(s, obj.con);
SqlDataReader dr = cmd.ExecuteReader();
dr.Read();
int i = Convert.ToInt16(dr["Id"].ToString());
txtId.Text = (i + 1).ToString();
I am displaying it in a textbox.
But when the last row in the table is deleted, I still get the recently deleted row's value in the textbox.
How should I get the next autoincrement value?
To get the next auto-increment value from SQL Server:
This will fetch the current auto-increment value:
SELECT IDENT_CURRENT('table_name');
And the next auto-increment value:
SELECT IDENT_CURRENT('table_name') + 1;
This will work even if you add a row and then delete it, because IDENT_CURRENT returns the last identity value generated for a specific table, in any session and any scope.
Try this:
SELECT IDENT_CURRENT('tbl_name') + IDENT_INCR('tbl_name');
If you are using Microsoft SQL Server, use this statement to get the current identity value of a table, then add the increment value you specified when designing the table if you want the next id:
SELECT IDENT_CURRENT('<TableName>')
As for me, the best answer is:
DBCC CHECKIDENT('table_name')
You will see two values (probably the same): the current identity value and the current column value.
When you delete a row from the table, the next number stays the same; it doesn't decrement in any way.
So if you have 100 rows and you delete row 100, you'll have 99 rows, but the next number is still going to be 101.
select isnull((max(AddressID)+1),1) from AddressDetails
max(id) will get you the maximum number in the list of employee_master ids;
e.g. for id = 10, 20, 100, max will get you 100.
But the record you deleted may not have been 100,
so you still get 100 back.
One reason I suspect this might be the issue: you are not using ORDER BY id in your query.
For MS SQL 2005 and greater:
SELECT CAST(ISNULL(last_value, seed_value) AS int) + CAST(increment_value AS int) AS NextID
FROM sys.identity_columns
WHERE object_id = OBJECT_ID('<Table_Name>')
Just a thought: if what you want is the last auto-number that you inserted on an already open connection, try using:
SELECT @@IDENTITY
from that connection. That's the best way to keep track of what has just happened on a given connection, and it avoids race conditions with other connections. Getting the maximum identity is not generally feasible.
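For illustration (the original answer has no code), reading @@IDENTITY on the same connection in C# might look like the sketch below; connectionString, name, and address are assumed to be defined. Note that SCOPE_IDENTITY() is often preferred over @@IDENTITY because it ignores identity values generated by triggers.
using (var con = new SqlConnection(connectionString))
using (var cmd = new SqlCommand(
    "INSERT INTO Employee_Master (name, address) VALUES (@name, @address); SELECT @@IDENTITY;", con))
{
    cmd.Parameters.AddWithValue("@name", name);
    cmd.Parameters.AddWithValue("@address", address);
    con.Open();
    // ExecuteScalar returns the @@IDENTITY value as a numeric (decimal)
    int newId = Convert.ToInt32(cmd.ExecuteScalar());
}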
SqlConnection con = new SqlConnection(@"Data Source=.\SQLEXPRESS;Initial Catalog=databasename;User ID=sa;Password=123");
con.Open();
SqlCommand cmd = new SqlCommand("SELECT TOP(1) UID FROM InvoiceDetails ORDER BY 1 DESC", con);
SqlDataReader reader = cmd.ExecuteReader();
// no loop needed since the query retrieves at most one row
if (reader.Read())
{
    string data = reader["UID"].ToString();
    int i = Int32.Parse(data);
    i++;
    txtuid.Text = i.ToString();
}

How to get a specific column value from a ScriptDb query in Google Apps Script

When I retrieve values from ScriptDb using the code below, it returns the entire db:
var db = ScriptDb.getMyDb();
var result = db.query({});
Using the following code retrieves the full row that satisfies the condition:
var db = ScriptDb.getMyDb();
var result = db.query({p_cost:324});
I want to get a specific column value from the row I have already retrieved with the code above. Is there any way to get a specific column value from ScriptDb? In a traditional database we would write the query as follows:
SELECT <COL_NAME> FROM <TABLE_NAME>
Try this:
while (result.hasNext()) {
  var res = result.next();
  var p_cost = res.p_cost; // each result is a stored object; read a column as a property
}
