libpq - PQsendQuery wait for complete result - c

I'm having a problem with libpq's PQexec function hanging on intermittent connections. After looking around the mailing list, the solution is to use the asynchronous functions PQsendQuery/PQgetResult and implement your own timeout. Now the issue I'm facing is that PQgetResult needs to be called multiple times until it returns NULL, and only then do you know it's done. However, the rest of my application expects a single PGresult object per query.
So my question is:
Is there a way to concatenate/join the multiple PGresults?
Can I somehow use PQisBusy & PQconsumeInput to wait until all the
results are ready before calling PQgetResult?

Credit to Laurenz Albe, who answered this over on the PostgreSQL mailing list.
If you have a single SQL statement, you will get only one PGresult. You get more than one if you send a query string with more than one statement, e.g.
PQsendQuery(conn, "SELECT 42; SELECT 'Hello'");
would result in two PGresults.
You can get multiple PGresults only when using asynchronous command processing; the corresponding PQexec would return only the PGresult of the last statement executed.
So you can get the same behaviour as PQexec by discarding all PGresults except for the last one.
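For completeness, here is a minimal sketch in C of the pattern that emerges from this: send the query with PQsendQuery, wait on the connection's socket with a timeout using the PQisBusy/PQconsumeInput loop asked about in the question, and keep only the last PGresult (the one PQexec would have returned). The helper name exec_with_timeout() and the per-wait timeout are illustrative choices for this sketch, not libpq API.

#include <stddef.h>
#include <sys/select.h>
#include <libpq-fe.h>

/* Illustrative helper: runs sql asynchronously and returns only the last
 * PGresult, or NULL on dispatch failure, connection error, or timeout. */
static PGresult *exec_with_timeout(PGconn *conn, const char *sql, int timeout_sec)
{
    PGresult *last = NULL;

    if (!PQsendQuery(conn, sql))
        return NULL;                          /* dispatch failed */

    for (;;) {
        /* Wait until libpq says PQgetResult will not block. */
        while (PQisBusy(conn)) {
            int sock = PQsocket(conn);
            fd_set readable;
            struct timeval tv = { .tv_sec = timeout_sec, .tv_usec = 0 };

            FD_ZERO(&readable);
            FD_SET(sock, &readable);

            if (select(sock + 1, &readable, NULL, NULL, &tv) <= 0) {
                if (last)
                    PQclear(last);
                return NULL;                  /* timed out or select() error */
            }
            if (!PQconsumeInput(conn)) {      /* connection trouble */
                if (last)
                    PQclear(last);
                return NULL;
            }
        }

        PGresult *res = PQgetResult(conn);    /* guaranteed not to block here */
        if (res == NULL)
            return last;                      /* no more results: done */

        if (last)
            PQclear(last);                    /* discard all but the last result, */
        last = res;                           /* mirroring what PQexec returns */
    }
}

A NULL return here can mean a dispatch failure, a connection error, or a timeout; telling those apart, and cancelling the still-running statement (PQgetCancel()/PQcancel()) before reusing the connection after a timeout, is left to the caller in this sketch.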

Related

Gmail API messages/list q after:{timestamp} does not work properly

Good day!
I'm trying to get the list of messages, and to filter them I use q=after:{timestamp}.
I do the following query:
After getting the message id I do a query to get the details of the message:
As you can see, the timestamp in the query and the internalDate of the message are the same.
When I increment the timestamp value to 1559717792 and run the query again, I get the same result:
In my view, the result should be empty because the internalDate is less than 1559717792. Is it an issue or is it my mistake?
Thank you!
The Gmail API uses the same search syntax as the web interface, and it's documented here:
https://support.google.com/mail/answer/7190
Specifically, it never says that "after:<epochSeconds>" works; it only gives the option of a formatted date, "after:YYYY/MM/DD". Empirically, the <epochSeconds> form does seem to work, but it's not documented (so beware that it's not guaranteed to be supported and may break at any time), and there also seem to be some rounding issues within the same second (so you may have to add or subtract a second to reliably get the results you want, if you need that level of accuracy).
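To make the two forms concrete, here is a small illustrative C snippet that only builds the request URLs for users.messages.list with each variant of the q parameter. Treat the endpoint shown (and the omitted authentication) as assumptions of this sketch, not something taken from the question; 1559717792 is the timestamp from the question, which falls on 2019/06/05 UTC.

#include <stdio.h>

int main(void)
{
    char documented[256], undocumented[256];

    /* Documented form: "after:YYYY/MM/DD", day granularity only. */
    snprintf(documented, sizeof documented,
             "https://gmail.googleapis.com/gmail/v1/users/me/messages?q=after:%s",
             "2019/06/05");

    /* Undocumented form: "after:<epochSeconds>"; appears to work, but may be
     * rounded within the same second and could stop working at any time. */
    snprintf(undocumented, sizeof undocumented,
             "https://gmail.googleapis.com/gmail/v1/users/me/messages?q=after:%ld",
             1559717792L);

    printf("%s\n%s\n", documented, undocumented);
    return 0;
}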

How do the IMMUTABLE, STABLE and VOLATILE keywords affect the behaviour of a function?

We wrote a function get_timestamp() defined as
CREATE OR REPLACE FUNCTION get_timestamp()
RETURNS integer AS
$$
SELECT (FLOOR(EXTRACT(EPOCH FROM clock_timestamp()) * 10) - 13885344000)::int;
$$
LANGUAGE SQL;
This was used on INSERT and UPDATE to enter or edit a value in the created and modified fields of the database record. However, we found that when adding or updating records consecutively it was returning the same value.
On inspecting the function in pgAdmin III we noted that, on running the SQL to build the function, the keyword IMMUTABLE had been injected after the LANGUAGE SQL statement. The documentation states that the default is VOLATILE ("If none of these appear, VOLATILE is the default assumption"), so I am not sure why IMMUTABLE was injected; however, changing this to STABLE has solved the issue of repeated values.
NOTE: As stated in the accepted answer, IMMUTABLE is never added to a function by pgAdmin or Postgres and must have been added during development.
I am guessing that what was happening was that this function was being evaluated and the result cached for optimization, as it was marked IMMUTABLE, indicating to the Postgres engine that the return value should not change given the same (empty) parameter list. However, when not used within a trigger but used directly in the INSERT statement, the function would return a distinct value five times before then returning the same value from then on. Is this due to some optimisation algorithm that says something like "If an IMMUTABLE function is used more than 5 times in a session, cache the result for future calls"?
Any clarification on how these keywords should be used in Postgres functions would be appreciated. Is STABLE the correct option for us given that we use this function in triggers, or is there something more to consider? For example, the docs say:
(It is inappropriate for AFTER triggers that wish to query rows
modified by the current command.)
But I am not altogether clear on why.
The keyword IMMUTABLE is never added automatically by pgAdmin or Postgres. Whoever created or replaced the function did that.
The correct volatility for the given function is VOLATILE (also the default), not STABLE - or it wouldn't make sense to use clock_timestamp(), which is VOLATILE, in contrast to now() or CURRENT_TIMESTAMP, which are STABLE: those return the same timestamp within the same transaction. The manual:
clock_timestamp() returns the actual current time, and therefore its
value changes even within a single SQL command.
The manual warns that function volatility STABLE ...
is inappropriate for AFTER triggers that wish to query rows modified
by the current command.
.. because repeated evaluation of the trigger function can return different results for the same row. So, not STABLE.
You ask:
Do you have an idea as to why the function returned correctly five
times before sticking on the fifth value when set as IMMUTABLE?
The Postgres Wiki:
With 9.2, the planner will use specific plans regarding to the
parameters sent (the query will be planned at execution), except if
the query is executed several times and the planner decides that the
generic plan is not too much more expensive than the specific plans.
Bold emphasis mine. This doesn't seem to make sense for an IMMUTABLE function without input parameters. But the false label is overridden by the VOLATILE function in the body (which voids function inlining): a different query plan can still make sense.
Related:
PostgreSQL Stored Procedure Performance
Aside
trunc() is slightly faster than floor() and does the same here, since positive numbers are guaranteed:
SELECT (trunc(EXTRACT(EPOCH FROM clock_timestamp()) * 10) - 13885344000)::int

ColdFusion 10 error with Stored Procedures

In a .CFC file, within a CFfunction and with CFargument tags.
<cfscript>
var sp=new storedproc();
sp.setDatasource(variables.datasource);
sp.setProcedure("storedProcedure_INSERT");
sp.addParam(cfsqltype="cf_sql_integer",type="in",value=arguments.one);
sp.addParam(cfsqltype="cf_sql_integer",type="in",value=arguments.two);
sp.addParam(cfsqltype="cf_sql_integer",type="in",value=arguments.three);
sp.addParam(cfsqltype="cf_sql_integer",type="in",value=arguments.four);
sp.addProcResult(name="results",resultset=1);
//writeDump(sp);break; //This dump is reached
var spObj=sp.execute(); //blows up here; this is never reached
writeDump(spObj);break; //This is never reached, either.
var spResults=spObj.getProcResultSets().results;
A shiny nickel to anyone who can tell me why the sp.execute() is blowing up with message
"Cannot find results key in structure.
The specified key, results, does not exist in the structure."
I've used this pseudo-code many, many times in the past, and never had it do this. I'm connected to an MSSQL Server 2012 DB, everything's cricket in CF Admin, and other SPs are working properly. The stack trace doesn't even include any of MY code at all o_O
The error occurred in C:/ColdFusion10/cfusion/CustomTags/com/adobe/coldfusion/base.cfc: line 491
Called from C:/ColdFusion10/cfusion/CustomTags/com/adobe/coldfusion/storedproc.cfc: line 142
Called from //hq-devfs/development$/websites/myProject/cfc/mySOAPWSDLs.cfc: line 123
And SO is blowing up if I try and paste anymore of that. Google has...not been helpful ._.
Short answer: The error means you are trying to retrieve a resultset from the stored procedure, when it does not actually return one. A simple solution is to add a SELECT to the end of your procedure, so it returns a resultset containing the data you need. Then your original code will work:
SELECT @@ROWCOUNT AS NumOfRowsAffected;
Longer answer:
The method you are using, addProcResult(), is the equivalent of <cfprocresult>. It is intended to capture a resultset returned from a stored procedure. (Due to CF's poor choice of attribute names, a lot of people think "resultset" means the storedproc "result" structure, but they are two totally different things.) A "resultset" is a "query object", in CF parlance.
While all four (4) of the primary SQL statements return some result, not all of them return a "query object":
Only SELECT statements generate a "query object"
INSERT/UPDATE/DELETE statements simply return the number of rows affected. They do not generate a "query object".
Since your stored procedure performs an INSERT, it does not generate a "query object". Hence the error when you try and grab the non-existent query here:
sp.addProcResult(name="results",resultset=1);
The simple solution is to add a SELECT statement to the end of your stored procedure, so that it does return a query object. Then your code will work as expected.
As an aside, I suspect you were actually trying to grab the "result" structure, but used the wrong method. The equivalent of <cfstoredproc result=".."> is getPrefix(). Though that would not work here anyway. According to the docs, it does not contain the number of rows affected. Probably because stored procedures can execute multiple statements, each one potentially returning a row count, so there is not just a single value to return.

Table creation in sqlite3 isn't working

I have a problem with the creation of one table in sqlite3. Basically, the code (in C) that I use for the creation is the following:
do {
    sprintf(buffer, "CREATE TABLE new_tab AS SELECT * FROM fileslog WHERE file_owner='%s' AND state='%s';", file_owner, state);
    rc = sqlite3_prepare(db, buffer, -1, &result, NULL);
} while ((rc == SQLITE_BUSY) || (rc == SQLITE_LOCKED));
My problem is that no table is created when I execute this code. I have printed the rc variable to see the possible errors, but its value is 0 (SQLITE_OK). I don't know what is happening or where the error is.
You are only preparing the SQL statement for execution.
To actually execute it, call sqlite3_step.
The steps involved, according to the SQL Statement Object documentation, are:
Create the object using sqlite3_prepare_v2() or a related function.
Bind values to host parameters using the sqlite3_bind_*() interfaces.
Run the SQL by calling sqlite3_step() one or more times.
Reset the statement using sqlite3_reset() then go back to step 2. Do this zero or more times.
Destroy the object using sqlite3_finalize().
(above list and links lifted from the official documentation.)
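Applied to the code in the question, a minimal sketch of those steps could look like the following; create_filtered_table() is an illustrative name (not from the question), meant to be called with the question's db handle and the SQL already built into buffer:

#include <stdio.h>
#include <sqlite3.h>

/* Sketch: prepare the statement, actually run it with sqlite3_step(),
 * then release it with sqlite3_finalize(). */
static int create_filtered_table(sqlite3 *db, const char *sql)
{
    sqlite3_stmt *stmt = NULL;
    int rc = sqlite3_prepare_v2(db, sql, -1, &stmt, NULL);

    if (rc != SQLITE_OK) {
        fprintf(stderr, "prepare failed: %s\n", sqlite3_errmsg(db));
    } else {
        rc = sqlite3_step(stmt);    /* this is what actually creates the table */
        if (rc == SQLITE_DONE)
            rc = SQLITE_OK;         /* SQLITE_DONE means success for a statement returning no rows */
        else
            fprintf(stderr, "step failed: %s\n", sqlite3_errmsg(db));
    }

    sqlite3_finalize(stmt);         /* safe even if stmt is still NULL */
    return rc;
}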
sqlite3_prepare_v2() followed by sqlite3_step()s and sqlite3_finalize(), as suggested by Lasse V. Karlsen, is one way to run the SQL.
sqlite3_exec() is a simpler way for CREATE TABLE and other non-SELECT statements where you don't need to get result rows. The trade-off is that you can't use parameter binding (which can be useful for e.g. UPDATE and DELETE queries).
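And a sketch of the same statement run through sqlite3_exec(), reusing the db handle and buffer from the question's code:

char *errmsg = NULL;

if (sqlite3_exec(db, buffer, NULL, NULL, &errmsg) != SQLITE_OK) {
    fprintf(stderr, "CREATE TABLE failed: %s\n", errmsg);
    sqlite3_free(errmsg);    /* the error string is allocated by SQLite and must be freed */
}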

SSIS: execute first task if condition met else skip to next

I am getting to know SSIS, I apologize if the question is too simple.
I got a set of tasks inside a foreach-loop-container.
The first task needs only to get executed on condition that a certain user variable is not null or empty.
Otherwise, the flow should skip the first task and continue to the second one.
How would I go about realizing this (in detail) ?
Issue 1: There are two ways to interpret your logic: "...a certain user variable is not null or empty":
The (Variable is Not Null) OR the (Variable is Empty).
The (Variable is Not Null) OR the (Variable is Not Empty).
It's all about the object(s?) of the word "not". The differences are subtle but will impact when the first task in the Foreach loop executes. For demonstration purposes, I am assuming you intend #1.
Issue 2: The first task can no longer be first. In order to accomplish what you desire using SSIS inside the BIDS environment, you need to place another task ahead of the task formerly known as "the first task". This is so you can set a Precedence Constraint on the former first task from the new first task.
It is possible to accomplish what you desire by designing your SSIS dynamically from managed code, but I don't think this issue warrants the overhead associated with that design choice.
I like using an empty Sequence Container as an "Anchor" task - a task that exists solely to serve as the starting endpoint of a Precedence Constraint. I heavily document them as such. I don't want anyone deleting the "unnecessary empty container" and roaming the halls for days shaking their heads and repeating "Andy, Andy, Andy..." but I digress.
In the example below, I have two precedence constraints leaving the empty Sequence Container. One goes to the task that may be skipped and the other to the task following the task that can sometimes be skipped. A third precedence constraint is required between the task that can sometimes be skipped and the task following. It is important to note this third precedence constraint must be edited and the Multiple Constraints option set to OR. This allows the task following to execute when either of the mutually exclusive previous paths is taken. By default, this is set to AND and will require both paths to execute. By definition, that will not - cannot - happen with mutually exclusive paths.
I test the value of an SSIS String variable named @MyVar to see if it's Null or Empty. I used the Expression Only Evaluation Option for the constraints leaving the empty Sequence Container. The expressions vary, but they establish the mutual exclusivity of the two paths. My Foreach Loop Container looks like this:
I hope this helps.
:{>
The best thing can be to use the Disable property with an expression that implements your condition. Just search for how to use the Disable property.
How about a simple solution instead of some of the more complex ones that have already been given? For the task you want to conditionally skip, add an expression to the Disable property. Any expression that produces a true or false result will work, so for the question's example you could use:
ISNULL(@[User::MY_VAR]) || @[User::MY_VAR]==""
The only downside is that it may not be as visible as some of the other solutions, but it is far easier to implement.
I would create a For Loop Container around the task that needs the condition, with the following expressions (@i is the loop counter, @Foo is your user variable that you want to test):
InitExpression: @i=0
EvalExpression: @i<1 && !ISNULL(@Foo) && @Foo!=""
AssignExpression: @i=@i+1
There is no need to create a "script".
I think the best (and simpler) approach is to add a blank script task inside your loop container before your "first task", drag the green arrow from it to your "first task" (which obviously will become the second) and use the precedence constraint to do the check.
To do that, double-click the arrow, select "Expression" as the "Evaluation operation" and write your expression. After hitting OK the arrow will turn blue, indicating that it isn't a simple precedence constraint but has an expression assigned to it.
Hopefully I didn't misunderstand the question but a possible solution can be as written below.
I created a sample ForEach loop. The loop itself is an item enumerator. It enumerates the numbers 1, 2, 3. The actual value is stored in a variable called LoopVariable.
There is another variable named FirstShouldRun, which is a Boolean variable indicating whether the first task in the foreach loop should be run or not. I set this variable's EvaluateAsExpression property to true, and its expression is (@[User::LoopVariable] % 2) == 0. I would like to demonstrate with this that the first task should be started only every second time.
The two tasks do nothing much but display a MessageBox showing the task has been started.
I started the package, and the first and the third time the first task didn't start. In the second loop the MessageBox (showing "First started") appeared.
After that, you can set the FirstShouldRun variable as you like.
As I mentioned in my first comment to the OP, this solution is based on the idea Amos Wood wrote in another answer.
That's a bit tricky.
You have to create a Script Task and check if your variable is not null in there.
So first you have the script task in which you will have the following code in your Main() function:
public void Main()
{
if (Dts.Variables["User::yourVariable"].Value != null)
{
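// Variable has a value: flag "failure" so the precedence constraint set to Failure (the path to the conditional task) fires.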
Dts.TaskResult = (int)ScriptResults.Failure;
}
else
{
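// Variable is null: flag "success" so only the path to the next task is taken.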
Dts.TaskResult = (int)ScriptResults.Success;
}
}
Then you create two connections from your script task, one to the task that needs to be executed when your variable is not null, and one to the next task (or to another script, if you need to check again, if the variable is not null).
Then you right-click on the (green) arrow of your first connection and select "Failure". Right-click the connection to the next task / script and set it to "Completion".
It should then look something like this:
That's it.
