Parameters in BULK COPY statements in npgsql

I've read the information on the BULK COPY page of Npgsql's documentation here. Yet looking at the BeginBinaryExport() and BeginBinaryImport() methods, they both take plain strings. How would one construct a SQL-injection-safe version of a query for BeginBinaryImport() that took query parameters, e.g. one that didn't return all the rows of a table but only those that pass a certain filter, such as falling on a certain date?

Unfortunately this isn't currently supported. I've opened issue https://github.com/npgsql/npgsql/issues/3841 to track this.
In the meantime you'll have to interpolate parameters as strings into the query, and protect against SQL injection yourself.
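For example (a rough sketch, not an official Npgsql feature): if the filter value is a date, format it client-side into an unambiguous literal and interpolate that into the COPY text you pass to BeginBinaryExport(); the events table and its columns below are made up for illustration.

-- COPY commands cannot carry bind parameters, so the value is baked into the command text.
COPY (
    SELECT id, payload
    FROM events
    WHERE created_on = DATE '2021-09-01'   -- value formatted from a DateTime, never pasted from raw user text
) TO STDOUT (FORMAT BINARY);

Formatting or validating the value yourself (ISO dates, numeric parsing, whitelisted identifiers) is what stands in for parameter binding here.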

Related

User Defined Table Function with Procedural Logic

Our company is setting up a new Snowflake instance, and we are attempting to migrate some processing currently being done in MS SQL Server. I need to migrate a Table-Valued SQL Function into Snowflake. The source function has procedural logic in it, which, to my knowledge, is not allowed in Snowflake SQL UDTFs. I have been searching for a workaround with no success.
To be as specific as I can, I need a function that will take a string for input, decode that string, and return a table with the keys and their corresponding values. I cannot condense all of the logic to split the string and decode the keys into one SQL statement, so Snowflake SQL UDTFs will not work.
I looked into whether a UDTF can call a procedure and somehow I could simply return a result, but it does not look like that will work. Please let me know if there is any way to work around this.
I think a JavaScript UDTF is what you're looking for in Snowflake:
https://docs.snowflake.com/en/sql-reference/udf-js-table-functions.html
Funny, I just stumbled onto this as I'm running into the same thing myself. I found there is a SPLIT_TO_TABLE function that may be able to accomplish this. As Greg suggested, nesting this together in a CTE combined with a JOIN may allow you to accomplish what you need to do; a sketch follows below.
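A rough sketch of that idea, assuming the input is a delimited list of key=value pairs (the sample string and delimiter are made up):

-- Split 'color=red;size=XL;qty=3' into one row per pair, then split each pair into key and value.
WITH pairs AS (
    SELECT s.value AS pair
    FROM TABLE(SPLIT_TO_TABLE('color=red;size=XL;qty=3', ';')) AS s
)
SELECT SPLIT_PART(pair, '=', 1) AS pair_key,
       SPLIT_PART(pair, '=', 2) AS pair_value
FROM pairs;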

SQL Server : parameters for column names instead of values

This might seem like a silly question, but I'm surprised that I didn't find a clear answer to this already:
Is it possible to use SQL Server parameters for writing a query with dynamic column names (and table names), or does the input just need to be sanitized very carefully?
The situation is that tables and their column names (and the number of columns) are generated dynamically, and there is no way to know them beforehand in order to write a query manually. Since the tables & columns aren't known I can't use an ORM, so I'm resorting to manual queries. Usually I'd use parameters to fill in values to prevent SQL injection, however I'm pretty sure that this cannot be done the same way when specifying the table name and/or column names. I want to create generic queries for insert, update, upsert, and select, but I obviously don't want to open myself up to potential injection. Are there best practices on how to accomplish this safely?
Just as an FYI - I did see this answer, but since there's no way for me to know the column / table names beforehand a case statement probably won't work for this situation.
Environment: SQL Server 2014 via ADO.NET (.NET 4.5 / C#)
There is no mechanism for passing table or column references to procedures. You just pass them as strings and then use dynamic SQL to build your queries. You do have to take precautions to ensure that your string parameters are valid.
One way to do this would be to validate that all table and column reference strings have valid names in sys.tables and sys.columns before building your T-SQL queries. Then you can be sure that they can be used safely.
You can also use regular value parameters with dynamic SQL when using the sp_executesql procedure. You can't use it to validate your table and column names, but it does prevent SQL injection through your other parameters. A sketch combining both ideas follows below.
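A minimal sketch of that approach; @TableName, @ColumnName, and the Orders/CustomerId example are hypothetical:

-- 1) Validate the identifiers against the catalog views before using them.
DECLARE @TableName sysname = N'Orders',
        @ColumnName sysname = N'CustomerId',
        @Value int = 42;

IF EXISTS (
    SELECT 1
    FROM sys.columns c
    JOIN sys.tables t ON t.object_id = c.object_id
    WHERE t.name = @TableName AND c.name = @ColumnName
)
BEGIN
    -- 2) Build the dynamic statement with QUOTENAME for the identifiers,
    --    and leave the value as a real parameter for sp_executesql.
    DECLARE @sql nvarchar(max) =
        N'SELECT * FROM ' + QUOTENAME(@TableName) +
        N' WHERE ' + QUOTENAME(@ColumnName) + N' = @Value;';

    EXEC sp_executesql @sql, N'@Value int', @Value = @Value;
END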

ADO - Can I edit results of a complex query with multiple join statements?

I'm working on a data conversion utility which can push data from one master database out to a number of different databases. The utility itself will have no knowledge of how data is kept in the destination (table structure), but I would like to provide a way to write a SQL statement that returns data from the destination using a complex SQL query with multiple join statements, as long as the data comes back in a standardized format (field names) that the utility can recognize in an ADO query.
What I would like to do is then modify the live data in this ADO Query. However, since there are multiple join statements, I'm not sure if it's possible to do this. I know at least with BDE (I've never used BDE), it was very strict and you had to return all fields (*) and such. ADO I know is more flexible, but I don't know quite how flexible in this case.
Is it supposed to be possible to modify data in a TADOQuery in this manner, when the results include fields from different tables? And even if so, suppose I want to append a new record to the end (TADOQuery.Append). Would it append to two different tables?
The actual primary table I'm selecting from has a complementary table which is joined by the same primary key field; one is a "Small" table (brief info) and the other is a "Detail" table (more info for each record in the Small table). So a typical statement would include something like this:
select ts.record_uid, ts.SomeField, td.SomeOtherField from table_small ts
join table_detail td on td.record_uid = ts.record_uid
There are also a number of other joins to records in other tables, but I'm not worried about appending to those ones. I'm only worried about appending to the "Small" and "Detail" tables - at the same time.
Is such a thing possible in an ADO Query? I'm willing to tweak and modify the SQL statement in any way necessary to make this possible. I have a bad feeling though that it's not possible.
Compatibility:
SQL Server 2000 through 2008 R2
Delphi XE2
Editing those fields which have no influence on the joins is usually no problem.
Appending is more complicated; you can limit the Append to one of the tables with:
procedure TForm.ADSBeforePost(DataSet: TDataSet);
begin
  inherited;
  // Tell ADO which of the joined tables inserts/updates should target.
  TCustomADODataSet(DataSet).Properties['Unique Table'].Value := 'table_small';
end;
but without a Requery you won't get much further.
A better way would be to set the values via a stored procedure (e.g. in BeforePost), then Requery and Abort.
If your view were a persistent database view, you would be able to use INSTEAD OF triggers.
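A rough illustration of that idea, reusing the question's table and column names: if the join were a persistent view, an INSTEAD OF INSERT trigger could split each appended row across both tables.

CREATE VIEW v_small_detail AS
    SELECT ts.record_uid, ts.SomeField, td.SomeOtherField
    FROM table_small ts
    JOIN table_detail td ON td.record_uid = ts.record_uid;
GO
CREATE TRIGGER tr_v_small_detail_insert ON v_small_detail
INSTEAD OF INSERT AS
BEGIN
    -- Route each inserted row into its underlying table.
    INSERT INTO table_small (record_uid, SomeField)
        SELECT record_uid, SomeField FROM inserted;
    INSERT INTO table_detail (record_uid, SomeOtherField)
        SELECT record_uid, SomeOtherField FROM inserted;
END;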
Jerry,
I encountered the same problem on Firebird, and from experience I can tell you that it can be done (up to a certain complexity) by using CachedUpdates. A very good resource is this one: http://podgoretsky.com/ftp/Docs/Delphi/D5/dg/11_cache.html. This article has the answers to all your questions.
I have abandoned the original idea of live ADO query updates, as it has become more complex than I can wrap my head around. The scope of the data push project has changed, and therefore this is no longer an issue for me, however still an interesting subject to know.
The new structure of the application consists of attaching multiple "Field Links" on various fields from the original set of data. Each of these links references the original field name and a SQL Statement which is to be executed when that field is being imported. Multiple field links can be on one single field, therefore can execute multiple statements, placing the value in various tables, etc. The end goal was an app which I can easily and repeatedly export a common dataset from an original source to any outside source with different data structures, without having to recompile the app.
However, the concept of cached updates was not appealing to me, simply because of the fact pointed out in the link in RBA's answer that data can be changed in the database in the meantime. So I will instead integrate my own method of customizable data pushes.

dynamic insertion with xml or stored procedure

I have a group of 20 records that I need to batch-insert in one connection, so there are two solutions (XML or a stored procedure). This operation is executed frequently, so I need fast performance and the least overhead.
1) I think XML performs more slowly, but we can freely specify how many records we need to insert in a batch by producing the appropriate XML. Since I don't know the values of each field in a record, there may be characters that malform the XML, such as " or field tags appearing inside values, so how should I prevent this?
2) Using a stored procedure is faster, but I need to define all the input parameters, which is a tedious task, and if I need to increase or decrease the number of records inserted in a batch, I have to change the SP.
So which solution is better for my environment with respect to my constraints?
XML is likely the better choice; however, there are other options:
If you're using SQL Server 2008 you can use Table-Valued Parameters instead (a sketch follows after this list).
Starting with .NET 2.0 you have the option of using SqlBulkCopy.
If you're using Oracle you can pass a user-defined type, but I'm not sure which versions of ODP and Oracle that works with.
Note these are all .NET approaches; I don't know whether they will work for you. It would probably help if you included the database, its version, and the client technology that you're using.
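A minimal sketch of the table-valued parameter approach (SQL Server 2008+); the type, procedure, and dbo.TargetTable names are made up for illustration:

-- A reusable table type describing one batch of records.
CREATE TYPE dbo.RecordBatch AS TABLE
(
    Id   int          NOT NULL,
    Name nvarchar(50) NOT NULL
);
GO
CREATE PROCEDURE dbo.InsertRecordBatch
    @Records dbo.RecordBatch READONLY
AS
BEGIN
    -- One set-based insert for the whole batch, however many rows it contains.
    INSERT INTO dbo.TargetTable (Id, Name)
    SELECT Id, Name FROM @Records;
END;

From the client side the batch is passed as a single structured parameter, so the number of rows can vary without changing the procedure.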

How do SQL parameters work internally?

A coworker and I were browsing SO when we came across a question about SQL Injection, and it got us wondering: how do parametrized queries work internally? Does the API you are using (assuming it supports parametrized queries) perform concatenation, combining the query with the parameters? Or do the parameters make it to the SQL engine separately from the query, and no concatenation is performed at all?
Google hasn't been very helpful, but maybe we haven't searched for the right thing.
The parameters make it to the SQL engine separately from the query. An execution plan is calculated or reused for the parametrized query, and then the query is executed by the SQL engine with the parameters.
Parameters make it to the SQL server intact, individually "packaged" with metadata indicating their type, whether they are Input or Output, etc. As Alex Reitbort points out, this is because parametrized statements are a server-level concept, not merely a convenient way of invoking commands from various connection layers.
I doubt that SQL Server builds a complete query string from the given parametrized query with the parameter list concatenated in.
It most likely parses the given parametrized command string, splitting it into an internal data structure based on reserved words and symbols (SELECT, FROM, ",", "+", etc). Within that data structure there are properties/places for values like table names, literals, etc. It is here that it copies (verbatim) each passed-in parameter (from the list) into the proper section of that structure.
So your @UserName value of: 'x';delete from users --
never needs to be escaped; it is just used as the literal value it really is.
Parameters are passed along with the query (not within the query), and are automatically escaped by the API as they are sent in accordance with the underlying database communications protocol.
For example, you might have
Query: <<<<select * from users where username = :username>>>>
Param: <<<<:username text<<<<' or '1' = '1>>>>>>>>
That's not the exact encoding any database protocol actually uses, but you get the idea.
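For a more concrete, SQL Server-flavoured illustration (a sketch, not the wire format): a parametrized command effectively arrives as a call equivalent to sp_executesql, with the statement text and the values as separate arguments.

-- The statement and the value travel separately, so the injection attempt
-- below is only ever treated as data (a literal username to search for).
EXEC sp_executesql
    N'select * from users where username = @username',
    N'@username nvarchar(100)',
    @username = N'x''; delete from users --';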
