How to bulk insert data with a time string in Mongoid

class Topic
  include Mongoid::Document
  # ....
  field :public_at, type: DateTime
  # ...
end

Topic.collection.insert([{public_at: "2013-10-30 11:45:56"}])
$> Topic.first
Hirb Error: undefined method `getlocal' for "2013-10-30 11:45:56":String
The string was not converted to a Time.

The problem is that you are inserting a String where a Time is wanted: a batch insert (an Array of Hashes) goes straight to the Moped driver and bypasses Mongoid's field type coercion.
MongoDB stores dates as the BSON UTC datetime type, which the Moped driver maps to Ruby Time.
The following test shows what you want, converting the String to a Time with Time.parse.
Note that your public_at field comes back as a DateTime because that is the type declared on the Topic model.
Hope that this helps.
test/unit/topic_test.rb

require 'test_helper'

class TopicTest < ActiveSupport::TestCase
  def setup
    Topic.delete_all
    puts
  end

  test "0. mongoid version" do
    puts "Mongoid::VERSION:#{Mongoid::VERSION}\nMoped::VERSION:#{Moped::VERSION}"
  end

  test "batch insert" do
    time = Time.parse("2013-10-30 11:45:56")
    Topic.collection.insert([{public_at: time}])
    assert_equal 1, Topic.count
    assert_equal DateTime, Topic.first.public_at.class
    p Topic.first
  end
end
$ rake test
Run options:
# Running tests:
[1/2] TopicTest#test_0._mongoid_version
Mongoid::VERSION:3.1.5
Moped::VERSION:1.5.1
[2/2] TopicTest#test_batch_insert
#<Topic _id: 528c23e68ce7aa1e667734de, public_at: 2013-10-30 15:45:56 UTC>
Finished tests in 0.051755s, 38.6436 tests/s, 38.6436 assertions/s.
2 tests, 2 assertions, 0 failures, 0 errors, 0 skips

Related

SoapUI NG Pro - Executing an UPDATE script in SoapUI using Groovy

I need to be able to execute an UPDATE SQL script, but it isn't working.
Here is a link to the site that I used for reference:
https://groovyinsoapui.wordpress.com/tag/sql-eachrow-groovy-soapui/
Here is the format of the code that I ended up writing (due to the nature of the work I am doing, I am unable to provide the exact script that I wrote):
import groovy.sql.Sql
def groovyUtils = new com.eviware.soapui.support.GroovyUtils(context)
groovyUtils.registerJdbcDriver("com.microsoft.sqlserver.jdbc.SQLServerDriver")
def connectString = "jdbc:microsoft:sqlserver://:;databaseName=?user=&password="
sql = Sql.newInstance(connectString) // TEST YOUR CONNECT STRING IN A SQL BROWSER
sql.executeUpdate("UPDATE TABLE SET COLUMN_1 = 'VALUE_1' WHERE COLUMN_2 = 'VALUE_2'")
The response that I am getting is:
Script-result: 0
I also tried to use:
sql.execute("UPDATE TABLE SET COLUMN_1 = 'VALUE_1' WHERE COLUMN_2 = 'VALUE_2'")
Which returns the following response:
Script-result: false
From what you say it seems that no row has COLUMN_2 = 'VALUE_2', so the number of updated rows is 0.
I would first check that statement in Management Studio just to make sure.
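As a quick sanity check from the same Groovy script, you could count the rows the UPDATE would touch and log the affected-row count (a minimal sketch reusing the sql connection from the script above; the table and column names are the placeholders from the question):

// count the rows the WHERE clause matches
def row = sql.firstRow("SELECT COUNT(*) AS cnt FROM TABLE WHERE COLUMN_2 = 'VALUE_2'")
log.info "Rows matching the WHERE clause: ${row.cnt}"
// executeUpdate returns the number of affected rows
def updated = sql.executeUpdate("UPDATE TABLE SET COLUMN_1 = 'VALUE_1' WHERE COLUMN_2 = 'VALUE_2'")
log.info "Rows updated: ${updated}"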

Save huge XML from SQL to web

In SQL Server I have a function which generates a complex XML of all products with several tables joined: location, suppliers, orders, etc.
No problem there; it runs in 68 seconds and produces around 450 MB.
It should only be called occasionally during migration to another server, so it doesn't matter that it takes some time.
I want to make this available for download over webserver.
I've tried some variations of this in classic asp:
Response.Buffer = false
set rs=conn.execute("select cast(dbo.exportXML() as varchar(max)) as res")
response.write rs("res")
But I just get a standard
An error occurred on the server when processing the URL. Please contact the system administrator.
If you are the system administrator please click here to find out more about this error.
Not my usual custom 500-errorhandler, so I'm not sure how to find the error.
The problem is in response.write rs("res"); if I just do
temp = rs("res")
the script runs, but of course displays nothing; if I then do
response.write temp
I get the same failure.
So the problem is writing such a long string.
Can I save the file from T-SQL directly, and run the job periodically from SQL Agent?
I found that there seems to be a limit on how much data can be written at once using Response.Write. The workaround I used was to break the data into chunks like this:
Dim Data, Done
Done = False
Do While Not Done
    Data = RecordSet(0).GetChunk(8192)
    If Not Len(Data) = 0 Then
        Response.Write Data
    Else
        Done = True
    End If
Loop
Try this:
Response.ContentType = "text/xml"
rs.CursorLocation = 3
rs.Open "select cast(dbo.exportXML() as varchar(max)) as res",conn
'Persist the Recordset in XML format to the ASP Response object.
'The constant value for adPersistXML is 1.
rs.Save Response, 1

How to retrieve as text the value of a very long varchar(max) in SSMS

I can't find the exact solution to this problem. I have a SQL script that creates another, very long script in several steps.
What I do, along the script, is append new pieces of script to a varchar(max) variable using concatenation. The final script is so long that it's difficult for me to retrieve it. I use the following as the final instruction:
SELECT [processing-instruction(x)] = @MyScript
FOR XML PATH(''), TYPE;
In this way I can get quite long results, but sometimes the result is so long that SSMS seems to run out of memory.
I tried saving my variable @MyScript by selecting it and saving the result as text or as a file, but it saves less than 20K characters. I have set the XML max output length to unlimited and it seems to work, but when I click on the result cell with the blue content (the XML with the script), SSMS freezes.
The nice thing is that apparently the script is generated quite fast (I am logging the different steps with print), but I can't see the results of my efforts.
Is there some way I can get hold of the content of this lengthy varchar(max)?
Thanks
Create a procedure that selects the variable as output.
SELECT @MyScript XmlData
FOR XML PATH(''), TYPE;
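A rough sketch of such a wrapper procedure (hypothetical; the name sp_OutputXml matches the bcp command below, and how @MyScript gets populated depends on your generating script):

CREATE PROCEDURE sp_OutputXml
AS
BEGIN
    DECLARE @MyScript NVARCHAR(MAX);
    -- build @MyScript here exactly as your existing script does
    SET @MyScript = N'...';

    SELECT @MyScript AS XmlData
    FOR XML PATH(''), TYPE;
END;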
Then go to the command line and execute:
bcp "exec sp_OutputXml" queryout MyScript.xml -w -T -S <server> -d <database>
If you wanted to do it all with T-SQL you could run the bcp command with xp_cmdshell if you have that enabled on your server.
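For example (a sketch only; the placeholders are the same ones used in the bcp command above):

EXEC master..xp_cmdshell 'bcp "exec sp_OutputXml" queryout MyScript.xml -w -T -S <server> -d <database>';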
If you want to save the contents of a MAX type variable or field -- VARCHAR(MAX), NVARCHAR(MAX), and VARBINARY(MAX) -- to a file, you can create a SQLCLR stored procedure or function (I would choose a function so that it can be used inline in a query to save the contents of a field without first transferring it to a variable, not to mention being set-based and all).
For saving a string, you can probably get away with doing something as simple as File.WriteAllText. Something along the lines of:
C# code:

using System.Data.SqlTypes;
using Microsoft.SqlServer.Server;
using System.IO;

public class SaveStuff
{
    [SqlFunction(IsDeterministic = false, IsPrecise = true)]
    public static SqlInt32 SaveToFile([SqlFacet(MaxSize = 500)] SqlString FilePath,
                                      [SqlFacet(MaxSize = -1)] SqlString TextToSave)
    {
        File.WriteAllText(FilePath.Value, TextToSave.Value);
        return TextToSave.Value.Length;
    }
}
Please note:
due to accessing the file system, the assembly will need to be registered with PERMISSION_SET = EXTERNAL_ACCESS. While it is easier to set the database to TRUSTWORTHY ON, it is far better to:
sign the assembly and use a password to protect the key file.
create an Asymmetric Key from the DLL
create a Login from that Asymmetric Key
grant the Login the EXTERNAL ACCESS ASSEMBLY permission (a T-SQL sketch of these steps follows these notes)
the code above is using the system default encoding, which may or may not match how the string is encoded within SQL Server. If necessary, there is another overload of File.WriteAllText that accepts a 3rd input parameter to set the encoding.
the above C# code does not test either input parameter for .IsNull as it is better to create the T-SQL wrapper object using WITH RETURNS NULL ON NULL INPUT as it skips calling this method entirely if either input param is NULL.
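A sketch of the key-and-login steps above in T-SQL (the key name, login name, and DLL path are placeholders, not part of the original answer):

USE [master];
-- create a key from the signed DLL, a login from the key, and grant the permission
CREATE ASYMMETRIC KEY [SaveStuffKey] FROM EXECUTABLE FILE = 'C:\path\to\SaveStuff.dll';
CREATE LOGIN [SaveStuffLogin] FROM ASYMMETRIC KEY [SaveStuffKey];
GRANT EXTERNAL ACCESS ASSEMBLY TO [SaveStuffLogin];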
Test query:
DECLARE @Bob NVARCHAR(MAX);
SET @Bob = REPLICATE(CONVERT(NVARCHAR(MAX), N'This is just a test, yo! '), 1000000);
SELECT LEN(@Bob); -- 25,000,000

SET STATISTICS TIME ON;
SELECT dbo.SaveToFile(N'C:\TEMP\StackOverflow\z.txt', @Bob); -- 25,000,000
SET STATISTICS TIME OFF;
On my slow laptop, it exported 23.8 MB (25,000,000 bytes) in:
CPU time = 94 ms, elapsed time = 188 ms.
and, adding a 0 to the REPLICATE function, exported 238 MB (250,000,000 bytes) in:
CPU time = 1704 ms, elapsed time = 8714 ms.
(total elapsed time was 33 seconds, so it took 24 seconds to generate the value to be saved)
Now, if you don't want to mess with creating the SQLCLR assembly and the Asymmetric Key, etc., this function (named File_WriteFile), and many others (including File_WriteFileBinary), are available in the SQL# library (which I am the author of). Please note that the File_* functions are only available in the Full version and not in the Free version.
Another option that keeps SSMS from having to deal with the full contents of the large variable is to have SQL Server email you the contents of that variable as a SQL script. Something like:
sp_send_dbmail @query = 'SELECT @MyScript AS [--MyScript]',
    @recipients = 'your@email.address',
    @subject = 'My SQL Script',
    @attach_query_result_as_file = 1,
    @query_attachment_filename = 'DoLotsOfThings.sql',
    @query_no_truncate = 1;
Please note that the default maximum file attachment size is 1 MB. I am not sure if that limitation applies to query results that are attached as files, so you might need to first run the following to increase that limit:
sysmail_configure_sp 'MaxFileSize', '52428800'; -- set Max to 50 MB
More info:
sp_send_dbmail
sysmail_configure_sp

TransactionManagementError in test of Django model

In Django 1.6, I am trying to test a unique field.
# model tag
class Tag(models.Model):
    name = models.CharField(max_length=30, unique=True, null=True)

    def __unicode__(self):
        return self.name

# test unique of name field
class TagTest(TestCase):
    def test_tag_unique(self):
        t1 = Tag(name='music')
        t1.save()
        with self.assertRaises(IntegrityError):
            t2 = Tag(name='music')
            t2.save()
        self.assertEqual(['music'], [t.name for t in Tag.objects.all()])
With the last line I get this message:
TransactionManagementError: An error occurred in the current transaction. You can't execute queries until the end of the 'atomic' block.
Why?
EDIT
I get this with SQLite as the DB (development environment).
If you're using PostgreSQL, this is expected: once a statement fails inside a transaction, PostgreSQL refuses to run further queries until the transaction is rolled back.
Edit:
See this commit. Since it's in the base backend, it seems all backends now share this behavior: regardless of the backend used, if the transaction needs a rollback, an error is raised.
Tip:
Use Model.objects.create(attr='value') instead of instantiating the model and then calling .save().
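A sketch of how the test itself can be made to pass under this behavior (assuming Django 1.6's TestCase, which wraps each test in a transaction): wrap the statement that is expected to fail in its own atomic block so the outer transaction stays usable afterwards.

from django.db import IntegrityError, transaction
from django.test import TestCase
from myapp.models import Tag  # adjust the import to your app

class TagTest(TestCase):
    def test_tag_unique(self):
        Tag.objects.create(name='music')
        with self.assertRaises(IntegrityError):
            # isolate the expected failure so the test transaction is not left broken
            with transaction.atomic():
                Tag.objects.create(name='music')
        self.assertEqual(['music'], [t.name for t in Tag.objects.all()])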

Generate SQL string using schema.CreateTable fails with postgresql ARRAY

I'd like to generate the verbatim CREATE TABLE SQL string from a SQLAlchemy class containing a PostgreSQL ARRAY column.
The following works fine without the ARRAY column:
from sqlalchemy.dialects.postgresql import ARRAY
from sqlalchemy import *
from geoalchemy import *
from sqlalchemy.ext.declarative import declarative_base

metadata = MetaData(schema='refineries')
Base = declarative_base(metadata)

class woodUsers(Base):
    __tablename__ = 'gquery_wood'
    id = Column('id', Integer, primary_key=True)
    name = Column('name', String)
    addr = Column('address', String)
    jsn = Column('json', String)
    geom = GeometryColumn('geom', Point(2))
This works just as I'd like it to:
In [1]: from sqlalchemy.schema import CreateTable
In [3]: tab=woodUsers()
In [4]: str(CreateTable(tab.metadata.tables['gquery_wood']))
Out[4]: '\nCREATE TABLE gquery_wood (\n\tid INTEGER NOT NULL, \n\tname VARCHAR, \n\taddress VARCHAR, \n\tjson VARCHAR, \n\tgeom POINT, \n\tPRIMARY KEY (id)\n)\n\n'
However, when I add a PostgreSQL ARRAY column, it fails:
class woodUsers(Base):
    __tablename__ = 'gquery_wood'
    id = Column('id', Integer, primary_key=True)
    name = Column('name', String)
    addr = Column('address', String)
    types = Column('type', ARRAY(String))
    jsn = Column('json', String)
    geom = GeometryColumn('geom', Point(2))
The same commands as above result in a long traceback ending in:
/usr/local/lib/python2.7/dist-packages/sqlalchemy/sql/visitors.pyc in _compiler_dispatch(self, visitor, **kw)
70 getter = operator.attrgetter("visit_%s" % visit_name)
71 def _compiler_dispatch(self, visitor, **kw):
---> 72 return getter(visitor)(self, **kw)
73 else:
74 # The optimization opportunity is lost for this case because the
AttributeError: 'GenericTypeCompiler' object has no attribute 'visit_ARRAY'
If the full traceback is useful, let me know and I will post it.
I think this has to do with specifying a dialect for the compiler (?), but I'm not sure. I'd really like to be able to generate the SQL without having to create an engine. I'm not sure if that is possible, though. Thanks in advance.
There's probably a complicated solution that involves digging into sqlalchemy.dialects.
You should first try it with an engine, though. Fill in a bogus connection URL and just don't call connect().
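A minimal sketch of that approach (the URL is a throwaway, nothing connects until connect() or execute() is called; compiling directly against the PostgreSQL dialect should also work and needs no engine at all):

from sqlalchemy import create_engine
from sqlalchemy.dialects import postgresql
from sqlalchemy.schema import CreateTable

# engine with a bogus URL: it is only used to pick the dialect, never connected
engine = create_engine('postgresql://user:pass@localhost/dbname')
print(CreateTable(woodUsers.__table__).compile(engine))

# or skip the engine entirely and hand the dialect to compile()
print(CreateTable(woodUsers.__table__).compile(dialect=postgresql.dialect()))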
