Bind variables in R DBI

In R's DBI package, I'm not finding a facility for using bound variables. I did find a document (the original vignette from 2002) that says about bound variables, "Perhaps the DBI could at some point in the future implement this feature", but it looks like so far that's left undone.
What do people in R use for a substitute? Just concatenate strings right into the SQL? That's got some obvious problems for safety & performance.
EDIT:
Here's an example of how placeholders could work:
query <- "SELECT numlegs FROM animals WHERE color=?"
result <- dbGetQuery(caseinfo, query, bind="green")
That's not a very well-thought-out interface, but the idea is that you can use a value for bind and the driver handles the details of escaping (if the underlying API doesn't handle bound variables natively) without the caller having to reimplement it [badly].
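For what it's worth, later versions of DBI did grow essentially this interface via the params argument to dbGetQuery()/dbSendQuery(). A minimal sketch, assuming RSQLite and an animals table like the one above:
library(DBI)
library(RSQLite)

con <- dbConnect(SQLite(), ":memory:")
dbWriteTable(con, "animals", data.frame(color = "green", numlegs = 4L))

## The driver binds the value; no string-pasting or manual escaping needed
dbGetQuery(con, "SELECT numlegs FROM animals WHERE color = ?",
           params = list("green"))

dbDisconnect(con)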

For anyone coming to this question like I just did after googling for rsqlite and dbgetpreparedquery, it seems that in the latest version of RSQLite you can run a SELECT query with bind variables. I just ran the following:
query <- "SELECT probe_type,next_base,color_channel FROM probes WHERE probeid=?"
probe.types.df <- dbGetPreparedQuery(con, query, bind.data = data.frame(probeids = ids))
This was relatively fast (selecting 2,000 rows out of a 450,000 row table) and is incredibly useful.
FYI.

Below is a summary of what's currently supported in RSQLite for bound
parameters. You are right that there is not currently support for
SELECT, but there is no good reason for this and I would like to add
support for it.
If you feel like hacking, you can get a read-only checkout of all of
the DBI related packages here:
use --user=readonly --password=readonly
https://hedgehog.fhcrc.org/compbio/r-dbi/trunk
https://hedgehog.fhcrc.org/compbio/r-dbi/trunk/DBI
https://hedgehog.fhcrc.org/compbio/r-dbi/trunk/SQLite/RSQLite
I like to receive patches, especially if they include tests and
documentation. Unified diff, please. I actually do all my
development using git and so best case is to create a git clone of say
RSQLite and then send me diffs as git format-patch -n
git-svn..
Anyhow, here are some examples:
library("RSQLite")
make_data <- function(n)
{
alpha <- c(letters, as.character(0:9))
make_key <- function(n)
{
paste(sample(alpha, n, replace = TRUE), collapse = "")
}
keys <- sapply(sample(1:5, replace=TRUE), function(x) make_key(x))
counts <- sample(seq_len(1e4), n, replace = TRUE)
data.frame(key = keys, count = counts, stringsAsFactors = FALSE)
}
key_counts <- make_data(100)
db <- dbConnect(SQLite(), dbname = ":memory:")
sql <- "create table keys (key text, count integer)"
dbGetQuery(db, sql)

bulk_insert <- function(sql, key_counts)
{
    dbBeginTransaction(db)
    dbGetPreparedQuery(db, sql, bind.data = key_counts)
    dbCommit(db)
    dbGetQuery(db, "select count(*) from keys")[[1]]
}
## for all styles, you can have up to 999 parameters
## anonymous
sql <- "insert into keys values (?, ?)"
bulk_insert(sql, key_counts)
## named w/ :, $, #
## names are matched against column names of bind.data
sql <- "insert into keys values (:key, :count)"
bulk_insert(sql, key_counts[ , 2:1])
sql <- "insert into keys values ($key, $count)"
bulk_insert(sql, key_counts)
sql <- "insert into keys values (#key, #count)"
bulk_insert(sql, key_counts)
## indexed (NOT CURRENTLY SUPPORTED)
## sql <- "insert into keys values (?1, ?2)"
## bulk_insert(sql)

Hey hey - I just discovered that RSQLite, which is what I'm using in this case, does indeed have bound-variable support:
http://cran.r-project.org/web/packages/RSQLite/NEWS
See the entry about dbSendPreparedQuery() and dbGetPreparedQuery().
So in theory, that turns this nastiness:
df <- data.frame()
for (x in data$guid) {
    query <- paste("SELECT uuid, cites, score FROM mytab WHERE uuid='",
                   x, "'", sep = "")
    df <- rbind(df, dbGetQuery(con, query))
}
into this:
df <- dbGetPreparedQuery(
con, "SELECT uuid, cites, score FROM mytab WHERE uuid=:guid", data)
Unfortunately, when I actually try it, it seems that it's only for INSERT statements and the like, not for SELECT statements, because I get an error: RS-DBI driver: (cannot have bound parameters on a SELECT statement).
Providing that capability would be fantastic.
The next step would be to hoist this up into DBI itself so that all DBs can take advantage of it, and provide a default implementation that just pastes it into the string like we're all doing ourselves now.
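DBI did eventually gain a generic safe-interpolation fallback of roughly this shape: sqlInterpolate() runs each value through the driver's quoting rules and pastes it into the string for you. A small sketch, assuming a table mytab as above:
library(DBI)
library(RSQLite)

con <- dbConnect(SQLite(), ":memory:")

## sqlInterpolate() quotes the value safely and returns a complete SQL string
sql <- sqlInterpolate(con,
                      "SELECT uuid, cites, score FROM mytab WHERE uuid = ?guid",
                      guid = "some-guid-value")
sql

dbDisconnect(con)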

Related

How to select from an array column without getting malformed array errors

I've spent a good hour or so googling and trying various combinations but without success.
I wish to select from a table where one of the columns is single dimensional array of varchar(255).
In normal SQL I use the following query:
SELECT * FROM customers WHERE email_domains @> '{"@google.com"}';
That works perfectly.
But now I want to do the same from code. So I have tried this:
domain = '@google.com'
sql = "SELECT * FROM customers WHERE email_domains @> '{%s}';"
cursor.execute( sql, [domain] )
result = cursor.fetchall()
and a whole load of various combinations of escaped ' and " but I cannot get it to work.
The error I get is this:
ERROR: malformed array literal: "{"
LINE 1: ... * FROM customers WHERE email_domains @> '{'@goo....
^
DETAIL: Unexpected end of input.
All help appreciated :)
Ok. At times I amaze myself with my stupidity...
The solution (for me) was to simply restructure my sql to this:
SELECT * FROM customers WHERE %s = ANY(email_domains);
Thanks to everyone who offered suggestions.
psycopg2 converts lists to arrays for you. I can't test this easily at the moment, but I believe this should work:
domain = ['@google.com']
# Let psycopg2 do the escaping for you, don't put quotes in there
sql = "SELECT * FROM customers WHERE email_domains #> %s;"
cursor.execute( sql, [domain] )
result = cursor.fetchall()

R Converting SQL Server Query with Geometry Datatype to spatialpolygonsdataframe

I am trying to plot geometry (binary) polygon data from an SQL Server data source. What I want to do is use the Geometry Data Type from the SQL query for the polygons, and also the rest of the columns in the query as the @data attribute table within the SpatialPolygonsDataFrame class.
This is my code so far, to get the SQL query data into a simple data.frame and convert the binary datatype using wkb::readWKB().
From this stage, I do not know how to create the SpatialPolygonsDataFrame dataframe.
library(RODBC)
library(maptools)
library(rgdal)
library(ggplot2)
dbhandle <- odbcDriverConnect("connection string",rows_at_time = 1)
sqlStatement <- "SELECT ID
, shape.STAsBinary() as shape
, meshblock_number
, areaunit_code
, dpz_code
, catchment_id
FROM [primary_parcels] hp "
sqlStatement <- gsub("[\r\n]", "", sqlStatement)
parcelData <- sqlQuery(dbhandle, sqlStatement)
odbcClose(dbhandle)
parcelData$shape <- wkb::readWKB(parcelData$shape)
This might be too late; however, with the help of a friend and this I found a solution. It seems a bit roundabout, but I was working on a similar set of data and had the same problem. Please bear in mind that it is difficult to replicate your approach exactly, since you did not provide a reproducible example. You will also need to adjust the projection. Note that it is easier to build the odbc connection once and reuse it than to write out the connection each time.
library(rgeos)
library(mapview)
library(raster)
library(dplyr)
library(sp)
# You may ignore this
odbcCh <- odbcConnect("Rtest")
sqlStatement <- "SELECT ID, shape.STAsBinary() as shape, meshblock_number, areaunit_code, dpz_code, catchment_id FROM [primary_parcels] hp"
parcelData <- sqlQuery(odbcCh, sqlStatement)
# until here
things <- vector("list", length(parcelData$shape))
z <- 0
for (line in parcelData$shape) {
    things[[z + 1]] <- readWKT(line)
    z <- z + 1
}
Things <- do.call(bind, things)
Things.df <- SpatialPolygonsDataFrame(Things,
    data.frame(parcelData$ID, parcelData$catchment_id))
plot(Things.df)
#you may not need the rest
Things.df@proj4string <- CRS("+proj=nzmg +lat_0=-41.0 +lon_0=173.0 +x_0=2510000.0 +y_0=6023150.0 +ellps=intl +units=m")
mapview(Things.df)
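For anyone reading this later: if the sf package is an option, the readWKT loop above can be avoided entirely. A sketch under the assumption that parcelData$shape arrives as a list of raw WKB vectors (the form wkb::readWKB expects):
library(sf)

## st_as_sfc() has a WKB method: mark the list of raw vectors as class "WKB"
geoms <- st_as_sfc(structure(parcelData$shape, class = "WKB"))
parcels_sf <- st_sf(parcelData[, c("ID", "catchment_id")], geometry = geoms)
plot(st_geometry(parcels_sf))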

R RODBCext and Parameterizing IN statement?

I've been working to parameterize a SQL statement that uses the IN operator in the WHERE clause. I'm using the RODBCext package for parameterizing, but it seems to lack expansion of a list.
I was hoping to write code such as
sqlExecute("SELECT * FROM table WHERE name IN (?)", c("paul","ringo","john", "george")
I'm using the following code but wondered if there's an easier way.
library(RODBC)
library(RODBCext)
# Search inputs
names <- c("paul", "ringo", "john", "george")
# Build SQL statement
qmarks <- replicate(length(names), "?")
stringmarks <- paste(qmarks, collapse = ",")
sql <- paste("SELECT * FROM tableA WHERE name IN (", stringmarks, ")")
# expand to Columns - seems to be the magic step required
bindnames <- rbind(names)
# Execute SQL statement
dbhandle <- RODBC::odbcDriverConnect(connectionString)
result <- RODBCext::sqlExecute(dbhandle, sql, bindnames, fetch = TRUE)
RODBC::odbcClose(dbhandle)
It works but feel I'm using R to expand the strings in the wrong way (bit new to R - so many ways to do the same thing wrong). Somebody will probably say "that creates factors - never do that" :-)
I found this article, which suggests I'm on the right track, but it doesn't discuss having to expand the "?" and turn the list into columns of a data.frame:
R RODBC putting list of numbers into an IN() statement
Thank you.
UPDATE: As Benjamin shows below, the sqlExecute function can handle a list() of inputs. However, upon inspection of the resulting SQL I discovered that it uses cursors to roll up the results. This significantly increases the CPU and I/O over the sample code I show above.
While the library can indeed solve this for you, for large results it may be too expensive. There are two answers and it depends upon your needs.
Since the only parameter in your query is the collection for IN, you could get away with:
sqlExecute(dbhandle,
           "SELECT * FROM table WHERE name IN (?)",
           list(c("paul", "ringo", "john", "george")),
           fetch = TRUE)
sqlExecute will bind the values in the list to the question mark. Here, it will actually repeat the query four times, once for each value in the vector. It may seem kind of silly to do it this way, but when trying to pass strings, it's a lot safer in many ways to let the binding take care of setting up the appropriate quote structure rather than trying to paste it in yourself. You will generate fewer errors this way and avoid a lot of database security concerns.
What if you declare a table variable in a character object and then concatenate it with the query?
library(RODBC)
library(RODBCext)
# Search inputs
names <- c("paul", "ringo", "john", "george")
# Build SQL statement
sql_top <- paste0("SET NOCOUNT ON \r\n DECLARE @LST_NAMES TABLE (ID NVARCHAR(20)) \r\n INSERT INTO @LST_NAMES VALUES ('", paste(names, collapse = "'), ('"), "')")
sql_body <- "SELECT * FROM tableA WHERE name IN (SELECT ID FROM @LST_NAMES)"
sql <- paste0(sql_top, "\r\n", sql_body)
# Execute SQL statement
dbhandle <- RODBC::odbcDriverConnect(connectionString)
result <- RODBCext::sqlExecute(dbhandle, sql, fetch = TRUE)
RODBC::odbcClose(dbhandle)
The query will be as follows (the SET NOCOUNT ON is important for retrieving the results):
SET NOCOUNT ON
DECLARE @LST_NAMES TABLE (ID NVARCHAR(20))
INSERT INTO @LST_NAMES VALUES ('paul'), ('ringo'), ('john'), ('george')
SELECT * FROM tableA WHERE name IN (SELECT ID FROM @LST_NAMES)
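As an aside (this is plain DBI rather than RODBCext): the same IN-list pattern works with drivers that support positional binding, where you still generate one ? per value but let the driver do the quoting. A sketch with RSQLite and an invented tableA:
library(DBI)
library(RSQLite)

con <- dbConnect(SQLite(), ":memory:")
dbWriteTable(con, "tableA", data.frame(name = c("paul", "pete"),
                                       instrument = c("bass", "drums")))

names_in <- c("paul", "ringo", "john", "george")
placeholders <- paste(rep("?", length(names_in)), collapse = ",")
sql <- sprintf("SELECT * FROM tableA WHERE name IN (%s)", placeholders)

## One bound parameter per placeholder; the driver escapes each value
dbGetQuery(con, sql, params = as.list(names_in))

dbDisconnect(con)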

RODBC::sqlSave - problems creating/appending to a table

Related to several other questions on the RODBC package, I'm having problems using RODBC::sqlSave to write to a table on a SQL Server database. I'm using MS SQL Server 2008 and 64-bit R on a Windows RDP.
The solution in the third linked question does work [sqlSave(ch, df)]. But in this case, it writes to the wrong database. That is, my default DB is "C2G" but I want to write to "BI_Sandbox". And it doesn't allow for options such as rownames, etc. So there still seems to be a problem in the package.
Obviously, a possible solution would be to change my ODBC connection to point at the specified database, but it seems there should be a better method. And this wouldn't solve the problem of unusable parameters in the sqlSave command, such as rownames, varTypes, etc.
I have the following ODBC System DSN connection:
Microsoft SQL Server Native Client Version 11.00.3000
Data Source Name: c2g
Data Source Description: c2g
Server: DC01-WIN-SQLEDW\BISQL01,29537
Use Integrated Security: Yes
Database: C2G
Language: (Default)
Data Encryption: No
Trust Server Certificate: No
Multiple Active Result Sets(MARS): No
Mirror Server:
Translate Character Data: Yes
Log Long Running Queries: No
Log Driver Statistics: No
Use Regional Settings: No
Use ANSI Quoted Identifiers: Yes
Use ANSI Null, Paddings and Warnings: Yes
R code:
R> ch <- odbcConnect("c2g")
R> sqlSave(ch, zinq_scores, tablename = "[bi_sandbox].[dbo].[table1]",
append= FALSE, rownames= FALSE, colnames= FALSE)
Error in sqlColumns(channel, tablename) :
‘[bi_sandbox].[dbo].[table1]’: table not found on channel
# after error, try again:
R> sqlDrop(ch, "[bi_sandbox].[dbo].[table1]", errors = FALSE)
R> sqlSave(ch, zinq_scores, tablename = "[bi_sandbox].[dbo].[table1]",
append= FALSE, rownames= FALSE, colnames= FALSE)
Error in sqlSave(ch, zinq_scores, tablename = "[bi_sandbox].[dbo].[table1]", :
42S01 2714 [Microsoft][SQL Server Native Client 11.0][SQL Server]There is already an object named 'table1' in the database.
[RODBC] ERROR: Could not SQLExecDirect 'CREATE TABLE [bi_sandbox].[dbo].[table1] ("credibility_review" float, "creditbuilder" float, "no_product" float, "duns" varchar(255), "pos_credrev" varchar(5), "pos_credbuild" varchar(5))'
In the past, I've gotten around this by running the supremely inefficient sqlQuery with INSERT INTO row by row. But when I tried that this time, no data was written, even though the sqlQuery statement produced no error or warning message.
temp <-"INSERT INTO [bi_sandbox].[dbo].[table1]
+ (credibility_review, creditbuilder, no_product, duns, pos_credrev, pos_credbuild) VALUES ("
>
> for(i in 1:nrow(zinq_scores)) {
+ sqlQuery(ch, paste(temp, "'", zinq_scores[i, 1], "'",",", " ",
+ "'", zinq_scores[i, 2], "'", ",",
+ "'", zinq_scores[i, 3], "'", ",",
+ "'", zinq_scores[i, 4], "'", ",",
+ "'", zinq_scores[i, 5], "'", ",",
+ "'", zinq_scores[i, 6], "'", ")"))
+ }
> str(sqlQuery(ch, "select * from [bi_sandbox].[dbo].[table1]"))
'data.frame': 0 obs. of 6 variables:
$ credibility_review: chr
$ creditbuilder : chr
$ no_product : chr
$ duns : chr
$ pos_credrev : chr
$ pos_credbuild : chr
Any help would be greatly appreciated.
Also, if there is any missing detail, please let me know and I'll edit the question.
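Regarding the "change my ODBC connection" idea above: a DSN-less connection string aimed straight at BI_Sandbox would look roughly like this (a sketch; the server details are copied from the DSN listed above, and an integrated-security login is assumed):
library(RODBC)

ch2 <- odbcDriverConnect(paste0(
    "driver={SQL Server Native Client 11.0};",
    "server=DC01-WIN-SQLEDW\\BISQL01,29537;",
    "database=BI_Sandbox;",
    "trusted_connection=yes"))

## With the connection already inside BI_Sandbox, an unqualified table
## name avoids the [db].[schema].[table] lookup that sqlSave stumbles on
sqlSave(ch2, zinq_scores, tablename = "table1",
        append = FALSE, rownames = FALSE)
odbcClose(ch2)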
My apologies up front. This is not exactly a "simple example." It's pretty trivial, but there are a lot of parts. And by the end, you'll probably think I'm crazy for doing it this way.
Starting in SQL Server Management Studio
First, I've created a database on SQL Server called mtcars with default schema dbo. I've also added myself as a user. Under my own user name, I am the database owner, so I can do anything I want to the database, but from R, I will connect using a generic account that only has EXECUTE privileges.
The predefined table in the database that we are going to write to is called mtcars. (So the full path to the table is mtcars.dbo.mtcars; it's lazy, I know). The code to define the table is
USE [mtcars]
GO
/****** Object: Table [dbo].[mtcars] Script Date: 2/22/2016 11:56:53 AM ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TABLE [dbo].[mtcars](
    [OID] [int] IDENTITY(1,1) NOT NULL,
    [mpg] [numeric](18, 0) NULL,
    [cyl] [numeric](18, 0) NULL,
    [disp] [numeric](18, 0) NULL,
    [hp] [numeric](18, 0) NULL
) ON [PRIMARY]
GO
Stored Procedures
I'm going to use two stored procedures. The first is an "UPSERT" procedure, that will first try to update a row in a table. If that fails, it will insert the row into the table.
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE PROCEDURE dbo.sample_procedure
    @OID int = 0,
    @mpg numeric(18,0) = 0,
    @cyl numeric(18,0) = 0,
    @disp numeric(18,0) = 0,
    @hp numeric(18,0) = 0
AS
BEGIN
    -- SET NOCOUNT ON added to prevent extra result sets from
    -- interfering with SELECT statements.
    SET NOCOUNT ON;

    -- TRANSACTION code borrowed from
    -- http://stackoverflow.com/a/21209131/1017276
    SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
    BEGIN TRANSACTION;

    UPDATE dbo.mtcars
    SET mpg = @mpg,
        cyl = @cyl,
        disp = @disp,
        hp = @hp
    WHERE OID = @OID;

    IF @@ROWCOUNT = 0
    BEGIN
        INSERT dbo.mtcars (mpg, cyl, disp, hp)
        VALUES (@mpg, @cyl, @disp, @hp)
    END

    COMMIT TRANSACTION;
END
GO
Another stored procedure I will use is just the equivalent of RODBC::sqlFetch. As far as I can tell, sqlFetch depends on SQL injection, and I'm not allowed to use it. Just to be on the safe side of our data security policies, I write little procedures like this. (Data security is pretty tight here; you may or may not need this.)
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE PROCEDURE dbo.get_mtcars
AS
BEGIN
    -- SET NOCOUNT ON added to prevent extra result sets from
    -- interfering with SELECT statements.
    SET NOCOUNT ON;
    SELECT * FROM dbo.mtcars
END
GO
Now, from R
I have a utility function I use to help me manage inputting data into the stored procedures. sqlSave would do a lot of this automatically, so I'm kind of reinventing the wheel. The gist of the utility function is to determine if the value I'm pushing to the database needs to be nested in quotes or not.
#* Utility function. This does a couple helpful things like
#* Convert NA and NULL into a SQL NULL
#* wrap character strings and dates in single quotes
sqlNullString <- function(value, numeric = FALSE)
{
    if (is.null(value)) value <- "NULL"
    if (is.na(value)) value <- "NULL"
    if (inherits(value, "Date")) value <- format(x = value, format = "%Y-%m-%d")

    if (value == "NULL") return(value)
    else if (numeric) return(value)
    else return(paste0("'", value, "'"))
}
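A few example calls (the values are invented) to show what it returns:
sqlNullString(NA)                    # "NULL"  -- SQL NULL, unquoted
sqlNullString("Smith & Co")          # "'Smith & Co'"  -- wrapped in single quotes
sqlNullString(42, numeric = TRUE)    # 42  -- numeric values pass through unquoted
sqlNullString(as.Date("2016-02-22")) # "'2016-02-22'"  -- dates formatted, then quoted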
This next step isn't strictly necessary, but I'm going to do it just so that my R table is similar to my SQL table. This is organizational strategy on my part.
mtcars$OID <- NA
Now let's establish our connection:
server <- "[server_name]"
uid <- "[generic_user_name]"
pwd <- "[password]"
library(RODBC)
channel <- odbcDriverConnect(paste0("driver=SQL Server;",
"server=", server, ";",
"database=mtcars;",
"uid=", uid, ";",
"pwd=", pwd))
Now this next part is pure laziness. I'm going to use a for loop to push each row of the data frame the to SQL table one at a time. As noted in the original question, this is kind of inefficient. I'm sure I could write a stored procedure to accept several vectors of data, compile them into a temporary table, and do the UPSERT in SQL, but I don't work with large data sets when I'm doing this, and so it hasn't yet been worth it to me to write such a procedure. Instead, I prefer to stick with the code that is a little easier for me to reason with on my limited SQL skills.
Here, we're just going to push the first 5 rows of mtcars
#* Insert the first 5 rows into the SQL Table
for (i in 1:5)
{
    sqlQuery(channel = channel,
             query = paste0("EXECUTE dbo.sample_procedure ",
                            "@OID = ", sqlNullString(mtcars$OID[i]), ", ",
                            "@mpg = ", mtcars$mpg[i], ", ",
                            "@cyl = ", mtcars$cyl[i], ", ",
                            "@disp = ", mtcars$disp[i], ", ",
                            "@hp = ", mtcars$hp[i]))
}
And now we'll take a look at the table from SQL
sqlQuery(channel = channel,
query = "EXECUTE dbo.get_mtcars")
This next line is just to match up the OIDs in R and SQL for illustration purposes. Normally, I would do this manually.
mtcars$OID[1:5] <- 1:5
This next for loop will UPSERT all 32 rows. We already have 5, we're UPSERTing 32, and the SQL table at the end should have 32 if we've done it correctly. (That is, SQL will recognize the 5 rows that already exist)
#* Update/Insert (UPSERT) the entire table
for (i in 1:nrow(mtcars))
{
    sqlQuery(channel = channel,
             query = paste0("EXECUTE dbo.sample_procedure ",
                            "@OID = ", sqlNullString(mtcars$OID[i]), ", ",
                            "@mpg = ", mtcars$mpg[i], ", ",
                            "@cyl = ", mtcars$cyl[i], ", ",
                            "@disp = ", mtcars$disp[i], ", ",
                            "@hp = ", mtcars$hp[i]))
}
#* Notice that the first 5 rows were unchanged (though they would have changed
#* if we had changed the data...the point being that the stored procedure
#* correctly identified that these records already existed)
sqlQuery(channel = channel,
query = "EXECUTE dbo.get_mtcars")
Recap
The stored procedure approach has a major disadvantage in that it is blatantly reinventing the wheel. It also requires that you learn SQL. SQL is pretty easy to learn for simple tasks, but some of the code I've written for more complex tasks is pretty difficult to interpret. Some of my procedures have taken me the better part of a day to get right. (once they are done, however, they work incredibly well)
The other big disadvantage to the stored procedure is, I've noticed, it does require a little bit more code work and organization. I'd say it's probably been about 10% more code work and documentation than if I were just using SQL Injection.
The chief advantages of the stored procedures approach are
you have massive flexibility for what you want to do
You can store your SQL code into the database and not pollute your R code with potentially huge strings of SQL code
Avoiding SQL injection (again, this is a data security thing, and may not be an issue depending on your employer's policies. I'm strictly forbidden from using SQL injection, so stored procedures are my only option)
It should also be noted that I've not yet explored using Table-Valued parameters in my stored procedures, which might simplify things for me a bit.
"In the past, I've gotten around this by running the supremely inefficient sqlQuery with INSERT INTO row by row. But I tried this time and no data was written, although the sqlQuery statement did not have an error or warning message."
I faced this yesterday: in my case the issue was the schema. The table was actually created, but in my user's own schema.
The first time you can create it, and after that you get this error (that the object already exists).
After investigating, I found that some packages do not work correctly with schemas.
In the end I used the "insert by line" solution. The solution is available here and here.

Accessing SQLite Database in R

I want to access and manipulate a large data set in R. Since it's a large CSV file (~0.5 GB), I plan to import it into SQLite and then access it from R. I know the sqldf and RSQLite packages can do this, but I went over their manuals and they are not helpful. Being a newbie to SQL doesn't help either.
I want to know: do I have to set the R working directory to SQLite's and then go from there? And how do I read the database into R?
Heck, if you know how to access the DB from R without using SQL, please tell me.
Thanks!
It really is rather easy -- the path and filename of the sqlite db file is passed as the 'database' parameter. Here is what CRANberries does:
databasefile <- "/home/edd/cranberries/cranberries.sqlite"

## ...

## main worker function
dailyUpdate <- function() {
    stopifnot(all.equal(system("fping cran.r-project.org", intern = TRUE),
                        "cran.r-project.org is alive"))
    setwd("/home/edd/cranberries")
    dbcon <- dbConnect(dbDriver("SQLite"), dbname = databasefile)
    repos <- dbGetQuery(dbcon,
                        paste("select max(id) as id, desc, url ",
                              "from repos where desc!='omegahat' group by desc"))
    # ...
That's really all there is. Of course, there are other queries later on...
You can easily test all SQL queries in the sqlite client before trying them from R, or try them directly from R.
Edit: As the above was apparently too terse, here is an example straight from the documentation:
con <- dbConnect(SQLite(), ":memory:") ## in-memory, replace with file
data(USArrests)
dbWriteTable(con, "arrests", USArrests)
res <- dbSendQuery(con, "SELECT * from arrests")
data <- fetch(res, n = 2)
data
dbClearResult(res)
dbGetQuery(con, "SELECT * from arrests limit 3")
