I've got a CakePHP application connecting to a remote MSSQL server through ODBC, but it's not working as planned. Every query dies because CakePHP puts backticks around identifiers, which MSSQL does not accept.
As an example, I have a model called Item for a table called items, and when I call
$this->Item->find('all')
it tries to use the query
SELECT `Item`.`id`, `Item`.`name`, `Item`.`description` FROM `items` AS `Item` WHERE 1
...and I get an error about invalid syntax near ` at line 1.
Is there any way to prevent this behaviour and remove the backticks? Or better, to use square brackets, which is what SQL Server seems to like?
I recently took a good look at the ODBC driver with the intention of using it against MSSQL 2008 in CakePHP 1.3. Unless you are prepared to put in a considerable amount of work, it's not feasible at present.
Your immediate problem is that you need to override the default quote characters with [ and ]. These are set at the top of dbo_odbc.php:
var $startQuote = "[";
var $endQuote = "]";
Once you do this, the next issue you will run into is the default use of LIMIT, so you'll need to provide your own limiting function, copied from dbo_mssql, to override it:
/**
 * Returns a limit statement in the correct format for the particular database.
 *
 * @param integer $limit Limit of results returned
 * @param integer $offset Offset from which to start results
 * @return string SQL limit/offset statement
 */
function limit($limit, $offset = null) {
    if ($limit) {
        $rt = '';
        if (!strpos(strtolower($limit), 'top') || strpos(strtolower($limit), 'top') === 0) {
            $rt = ' TOP';
        }
        $rt .= ' ' . $limit;
        if (is_int($offset) && $offset > 0) {
            $rt .= ' OFFSET ' . $offset;
        }
        return $rt;
    }
    return null;
}
You'll then run into two problems, neither of which I solved.
In the describe function, the odbc_field_type call does not return a field type. I'm not sure how critical this is if you describe the fields in the model yourself, but it doesn't sound promising.
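If the missing types do become a problem, one possible workaround is to describe the fields in the model by hand. This is only a sketch: it relies on Cake 1.x skipping the datasource describe() call when the model already carries a $_schema array, and the column types and lengths below are assumptions based on the example table above.

class Item extends AppModel {
    var $name = 'Item';

    // Hand-written schema so Cake never has to ask odbc_field_type()
    // for the column types. Types and lengths here are assumptions.
    var $_schema = array(
        'id'          => array('type' => 'integer', 'null' => false, 'key' => 'primary'),
        'name'        => array('type' => 'string',  'null' => true, 'length' => 255),
        'description' => array('type' => 'text',    'null' => true)
    );
}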
More crucially, in the fields function that's used to generate the field list, Cake works by recursively exploding the dot syntax into a series of AS aliases. This is fine if your recursion level is zero, but with deeper recursion you end up with a field list that looks something like 'this.that.other AS this_dot_that.other AS this_dot_that_dot_other', which is invalid MSSQL syntax; the flat alias format you want instead is illustrated in the sketch below.
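Purely for illustration (the helper name below is invented, not part of Cake's API), the transformation you are after is a single bracketed alias per field rather than a chain of nested AS clauses:

// Hypothetical helper, not Cake code: shows the single-level alias
// format MSSQL will accept instead of the nested AS chain above.
function flattenFieldAlias($field, $startQuote = '[', $endQuote = ']') {
    $parts  = explode('.', $field);
    $quoted = $startQuote . implode($endQuote . '.' . $startQuote, $parts) . $endQuote;
    $alias  = $startQuote . implode('__', $parts) . $endQuote;
    return $quoted . ' AS ' . $alias;
}

// flattenFieldAlias('this.that.other')
//   => "[this].[that].[other] AS [this__that__other]"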
Neither of these is unsolvable, but at this point I decided it was simpler to rebuild my server and use the MSSQL driver than to keep chasing problems with the ODBC driver. YMMV.
ADDED: This question seems to be getting a bit of attention, so if anyone takes this further, please append your code to this answer and hopefully we can assemble a solution between us.
Why don't you just use the MSSQL DBO? https://github.com/cakephp/cakephp/blob/master/cake/libs/model/datasources/dbo/dbo_mssql.php
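If you go that route, it's only a datasource change in app/config/database.php. A sketch, with placeholder host and credentials, assuming the PHP mssql extension is available on the server:

class DATABASE_CONFIG {
    // Sketch only: host, login, password and database are placeholders.
    var $default = array(
        'driver'     => 'mssql',
        'persistent' => false,
        'host'       => 'your.sqlserver.host',
        'login'      => 'user',
        'password'   => 'secret',
        'database'   => 'your_database',
        'prefix'     => ''
    );
}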
I have a table called Sanad in Microsoft SQL Server with two columns, bedeh and bestan, both of type money. In Laravel I would like to fetch every Code together with the sum of its bedeh and bestan, but only where the sum of bedeh is higher than the sum of bestan. Here is the code I have written for this purpose in Laravel 6:
DB::table('t1')
->fromSub(function ($query){
return $query->selectRaw('Code, sum(bedeh) bed, sum(bestan) bes')
->from('Sanad')
->groupBy('Code');
}, 't1')
->where('bed', '>', 'bes')
->get();
However, when the code is executed, I run into the problem below:
[Microsoft][ODBC Driver 17 for SQL Server][SQL Server]Cannot convert a char value to money. The char value has incorrect syntax. (SQL: select * from (select Code, sum(bedeh) bed, sum(bestan) bes
What is interesting is that when I copy the resulting query and run it in Management Studio, it fetches the codes flawlessly. Does anyone know what the problem is?
Thanks,
Habib
Did you try casting the values in your query? E.g.
sum(isnull(cast(bedeh as float), 0)) as bed,
sum(isnull(cast(bestan as float), 0)) as bes
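Plugged into the original builder, the suggestion would look roughly like the sketch below (assuming the float cast is acceptable for your money amounts). It also swaps where for whereColumn so that bes is compared as a column of the derived table rather than bound as the literal string 'bes':

$rows = DB::table('t1')
    ->fromSub(function ($query) {
        // Cast the money columns to float before summing, as suggested above.
        return $query->selectRaw(
                'Code,
                 sum(isnull(cast(bedeh as float), 0)) as bed,
                 sum(isnull(cast(bestan as float), 0)) as bes')
            ->from('Sanad')
            ->groupBy('Code');
    }, 't1')
    // whereColumn compares the two computed columns instead of
    // binding 'bes' as a string value.
    ->whereColumn('bed', '>', 'bes')
    ->get();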
I finally changed my approach: instead of writing one query for the whole process, I divided it into parts, each of which works with Eloquent. To be specific, I selected Code, sum(bedeh), and sum(bestan) with this code:
$codes = Sanad::selectRaw('Code, sum(bedeh) bed, sum(bestan) bes')
->groupBy('Code')
->get();
Then I picked the Codes with the higher sum of bedeh in PHP instead of SQL, like this:
$debtors = array();
foreach ($codes as $c) {
if($c->bes < $c->bed) {
$debtors[] = $c->Code;
}
}
Now, we can use another select to get Eloquent instances like this:
Person::select(....)->whereIn('Code', $debtors)....->get();
The performance may decrease in this scenario; however, I can benefit from Eloquent facilities such as pagination.
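For example, the final query paginates as usual; Person and the page size below are just placeholders from my own code:

// Same whereIn query as above, but paginated (15 per page is arbitrary).
$people = Person::whereIn('Code', $debtors)->paginate(15);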
Habib
I am connecting to a UniVerse database (from Rocket Software) using their .NET driver. I would like to fetch data on demand per user request, page by page, i.e. do pagination. With other databases we could use OFFSET ... FETCH, but UniVerse doesn't seem to support it. It does not recognize the OFFSET keyword; something like
SELECT NAME, AGE FROM CONTACTS WHERE AGE > 25 OFFSET 5 SAMPLE 5
does not work. It does not recognize those keywords, and there is no good documentation :-(
Note: although it is traditionally a multi-value database, the instance I am using does not use multi-value types; the structure is normalized.
This is certainly one of the shortcomings of this platform. I have worked around it in the past with something similar to the following subroutine. I had to remove a bunch of stuff for brevity, but this compiles, so it must be completely bug free, right?
Caveats: you need to have an @SELECT dictionary item in each file you want to use this with, containing all of the columns you want to return.
Multivalues get a little tricky. I had flattened the data I was using this with so I did not run into that problem, but this does not do UNNESTs.
Also, you might want to add a value saying how many records there are in total, and possibly work out some kind of token passing and list saving to cut down on executing the query each time you run it, but that gets much, much deeper than the basic question at hand.
SUBROUTINE SQLSelectWithOffset(TableName, UVWithClause, Starting, Offset)
***********************************************************************
* PROGRAM ID: SQLSelectWithOffset
*
* PROGRAM TITLE: SQLSelectWithOffset
*
* DESCRIPTION: Universe doesn't support SQL commands using STARTING and OFFSET,
*              which makes life hard when you want all of a file
*              but you choke on the size. Tokens allow the select list to be saved.
* TableName    = UV file to select on. If this is blank the program will return the number of records remaining
* UVWithClause = Your criteria, the WITH or BY criteria you want in a sort select.
* Starting     = Holds your place in line
* Offset       = How many records to return
************************************************************************
      $INCLUDE UNIVERSE.INCLUDE ODBC.H

      RETURN.LIST = ""
      IF Starting = "" OR Starting < 1 THEN
         Starting = 1
      END

      GOSUB GET.MASTER.LIST

      FOR X = Starting TO Offset
         ID = EXTRACT(FULL.LIST, X, 0, 0)
         IF ID = "" THEN CONTINUE
         RETURN.LIST<-1> = ID
      NEXT X

      SELECT RETURN.LIST TO 9
      SQLSTMT = "SELECT * FROM ":TableName:" SLIST 9"
      ST = SQLExecDirect(@HSTMT, SQLSTMT)
      RETURN

GET.MASTER.LIST:
      STMT = "SSELECT ":TableName
      IF UVWithClause NE "" THEN
         STMT := " ":UVWithClause
      END
      EXECUTE "CLEARSELECT"
      EXECUTE STMT
      READLIST FULL.LIST ELSE FULL.LIST = ""
      RETURN
   END
Good luck, please only use this information for good!
I'm developing a web application with Play Framework 2.3.8 and Scala, with a complex architecture on both the backend and the front end. The backend uses MS SQL with many stored procedures, which we call through Anorm. And here is one of the problems.
I need to update some fields in the database. The front end calls the Play application and passes the name of the field and its value. I parse the field name and then need to generate an SQL query that updates that field, assigning null to every parameter except the one received. I tried to do it like this:
def updateCensusPaperXX(name: String, value: String, user: User) = {
  DB.withConnection { implicit c =>
    try {
      var sqlstring = "Execute [ScXX].[updateCensusPaperXX] {login}, {domain}"
      val params = List(
        "fieldName1",
        "fieldName2",
        ...,
        "fieldNameXX"
      )
      for (p <- params) {
        sqlstring += ", "
        if (name.endsWith(p))
          sqlstring += value
        else
          sqlstring += "null"
      }
      SQL(sqlstring)
        .on(
          "login" -> user.login,
          "domain" -> user.domain
        ).execute()
    } catch {
      case e: Throwable => Logger.error("update CensusPaper04 error", e)
    }
  }
}
But that doesn't work in all cases. For example, when I try to save a string, it gives me an error like:
com.microsoft.sqlserver.jdbc.SQLServerException: Incorrect syntax near 'some phrase'
What is the best way to generate sql query using anorm with all nulls except one?
The reason this is happening is that when you write the string value directly into the SQL statement, it needs to be quoted. One way to solve this would be to determine which of the fields are strings and add conditional logic to decide whether to quote the value. This is probably not the best way to go about it. As a general rule, you should be using named parameters rather than building a string with the parameter values. This has a few benefits:
It will probably be easier for you to diagnose issues because you will get more sensible error messages back at runtime.
It protects against the possibility of SQL injection.
You get the usual performance benefit of reusing the prepared statement although this might not amount to much in the case of stored procedure invocation.
What this means is that you should treat your list of fields as named parameters, as you already do with login and domain. This can be accomplished with some minor changes to your code above. First, you can build your SQL statement as follows:
val params = List(
  "fieldName1",
  "fieldName2",
  ...,
  "fieldNameXX"
)

val sqlString = "Execute [ScXX].[updateCensusPaperXX] {login}, {domain}," +
  params.map("{" + _ + "}").mkString(",")
What is happening above is that you no longer insert the values directly; you just build the string by appending a placeholder for each parameter to the end of your query string.
Then you can go ahead and build your parameter list. Note that the parameters to the on method of SQL are a vararg list of NamedParameter. Basically, we need to create a Seq of NamedParameter that covers "login", "domain" and the list of fields you are populating. Something like the following should work:
val userDomainParams: Seq[NamedParameter] =
  Seq("login" -> user.login, "domain" -> user.domain)

val additionalParams = params.map(p =>
  if (name.endsWith(p))
    NamedParameter(p, value)
  else
    NamedParameter(p, None)
).toSeq

val fullParams = userDomainParams ++ additionalParams

// At this point you can execute as follows
SQL(sqlString).on(fullParams: _*).execute()
What is happening here is that you build the list of parameters and then use the splat operator : _* to expand the sequence into the varargs needed as arguments for the on method. Note that the None used in the NamedParameter above is converted into a JDBC NULL by Anorm.
This takes care of the issue with strings, because you are no longer writing the string directly into the query, and it has the added benefit of eliminating other issues that come with building the SQL string by hand rather than using parameters.
I am using MS SQL Server 2008 with Hibernate. The question I have is how Hibernate implements setMaxResults.
Take the following simple scenario.
If I have a query that returns 100 rows and I pass 1 to setMaxResults, will the limit be applied by SQL Server itself (as if running a SELECT TOP 1 statement), or does Hibernate fetch all the results first (all 100 rows in this case) and pick the top one itself?
The reason I am asking is that this could become a huge performance issue as the number of rows grows.
Thank you.
Hibernate will generate a limit-type query for all dialects that support limit queries. As SQLServerDialect supports this (see org.hibernate.dialect.SQLServerDialect.supportsLimit() and .getLimitString()), you will get a SELECT TOP 1 query.
If you would like to be absolutely sure, you can turn on debug logging, or enable the show_sql option and test.
Maybe the following snippet will help. Assume we have a managed bean class EmpBean and we want only the first 5 records. The code is as follows:
public List<EmpBean> getData()
{
    Session session = null;
    try
    {
        session = HibernateUtil.getSession();
        Query qry = session.createQuery("FROM EmpBean");
        qry.setMaxResults(5);
        return qry.list();
    }
    catch(HibernateException e)
    {}
    finally
    {
        HibernateUtil.closeSession(session);
    }
    return null;
}
Here getSession and closeSession are static utility methods that take care of creating and closing the session.
I have queries that reside in multiple methods, each of which can contain multiple parameters. I am trying to reduce file size and line count to make the code more maintainable. Below is one such occurrence:
$sql_update = qq { UPDATE database.table
SET column = 'UPDATE!'
WHERE id = ?
};
$sth_update = $dbh->prepare($sql_update);
if ($dbh->err) {
my $error = "Could not prepare statement. Error: ". $dbh->errstr ." Exiting at line " . __LINE__;
print "$error\n";
die;
}
$sth_update->execute($parameter);
if ($dbh->err) {
my $error = "Could not execute statement. Error: ". $dbh->errstr ." Exiting at line " . __LINE__;
print "$error\n";
die;
}
This is just one example; there are various other select examples that take just one parameter, and also some with two or more. I guess I am just wondering: would it be possible to encapsulate all of this into a function/method and pass in an array of parameters? How would the parameters be populated into the execute() call?
If this were possible, I could write a method that you simply pass the SQL query and parameters to, and get back a reference to the fetched records. Does this sound safe at all?
If line count and maintainable code are your only goals, your best bet would be to use any one of the several fine ORM frameworks/libraries available. Class::DBI and DBIx::Class are two fine starting points. In case you are worried about spending additional time learning these modules, don't: it took me just one afternoon to get started and be productive. Using Class::DBI, for example, your example is just one line:
Table->retrieve(id => $parameter)->column('UPDATE!')->update;
The only downside (if that) of these frameworks is that very complicated SQL statements require writing custom methods, which may take some additional time (not too much) to learn.
No sense in checking for errors after every single database call. How tedious!
Instead, when you connect to the database, set the RaiseError option to true. Then if a database error occurs, an exception will be thrown. If you do not catch it (in an eval{} block), your program will die with a message, similar to what you were doing manually above.
The "execute" function does accept an array containing all your parameters.
You just have to find a way to indicate which statement handle you want to execute and you're done ...
It would be much better to keep your statement handles somewhere, because if you create and prepare a new one each time you don't really reap the benefits of "prepare" ...
About returning all rows: you can do that (something like "while fetchrow_hashref push"), but beware of large result sets that could eat all your memory!
Here's a simple approach using closures/anonymous subs stored in a hash by keyword name (compiles, but not otherwise tested), edited to include the use of RaiseError:
# define cached SQL in hash, to access by keyword
#
sub genCachedSQL {
    my $dbh  = shift;
    my $sqls = shift;    # hashref for keyword => sql query
    my %SQL_CACHE;
    while (my ($name, $sql) = each %$sqls) {
        my $sth = $dbh->prepare($sql);
        $SQL_CACHE{$name}->{sth}  = $sth;
        $SQL_CACHE{$name}->{exec} = sub {    # closure for execute(s)
            my @parameters = @_;
            $SQL_CACHE{$name}->{sth}->execute(@parameters);
            return sub {    # closure for resultset iterator - check for undef
                my $row;
                eval { $row = $SQL_CACHE{$name}->{sth}->fetchrow_arrayref(); };
                return $row;
            }    # end resultset closure
        }    # end exec closure
    }    # end while each %$sqls
    return \%SQL_CACHE;
}    # end genCachedSQL
my $dbh = DBI->connect('dbi:...', undef, undef, { RaiseError => 1 });   # username/password omitted
# initialize cached SQL statements
#
my $sqlrun = genCachedSQL($dbh,
{'insert_table1' => qq{ INSERT INTO database.table1 (id, column) VALUES (?,?) },
'update_table1' => qq{ UPDATE database.table1 SET column = 'UPDATE!' WHERE id = ? },
'select_table1' => qq{ SELECT column FROM database.table1 WHERE id = ? }});
# use cached SQL
#
my $colid1 = 1;
$sqlrun->{'insert_table1'}->{exec}->($colid1, "ORIGINAL");
$sqlrun->{'update_table1'}->{exec}->($colid1);
my $result = $sqlrun->{'select_table1'}->{exec}->($colid1);
print join("\t", @$_), "\n" while defined($_ = &$result());
my $colid2 = 2;
$sqlrun->{'insert_table1'}->{exec}->($colid2, "ORIGINAL");
# ...
I'm very impressed with bubaker's example of using a closure for this.
Just the same, if the original goal was to make the code-base smaller and more maintainable, I can't help thinking there's a lot of noise begging to be removed from the original code, before anyone embarks on a conversion to CDBI or DBIC etc (notwithstanding the great libraries they both are.)
If the $dbh had been instantiated with RaiseError set in the attributes, most of that code goes away:
$sql_update = qq { UPDATE database.table
SET column = 'UPDATE!'
WHERE id = ?
};
$sth_update = $dbh->prepare($sql_update);
$sth_update->execute($parameter);
I can't see that the error handling in the original code is adding much that you wouldn't get from the vanilla die produced by RaiseError, but if it's important, have a look at the HandleError attribute in the DBI manpage.
Furthermore, if such statements aren't being reused (caching how they're optimised is often the main purpose of preparing them; the other reason is to mitigate SQL injection by using placeholders), then why not use do?
$dbh->do($sql_update, \%attrs, @parameters);