I have an MSSQL table with a field of type image that has some text stored in it.
The field has data that looks like this:
54004800490053002000490053002000410020004c00490047004800540041005200540020004f0052004400450052002e00200020004c004900470048005400410052005400200049005300200044004f0049004e004700200054004800450020004600410042002e000d000a004c00490047004800540041005200540020005300480049005000500049004e004700200054004f00200043005500530054004f004d004500520020003c0038002d00320033002d00310037003e000d000a000d000a0043006f006e006e00690065002c00200070006c00650061007300650020007000720069006e007400200073007400690063006b00650072007300200066006f0072002000650061006300680020006f007500740065007200200062006f00780020007400680061007400200069006e0063006c0075006400650073002000740068006500200069006e0066006f003a000d000a0028003100290020006f00660020002800310029000d000a004c004100320020005400680072006500650020004c00610072006700650020000d000a00380036005c0022004c0020007800200036005c002200570020007800200038005c00220048000d000a004e00610074007500720061006c000d000a005000320030003900380031003000350020004d004f0044002000500069007a007a00610020005300750067006100720068006f007500730065002c00200055005400
In PHP I can convert that data with a SQL query like this:
SELECT CAST(CAST(CUST_ORDER_BINARY.BITS as VARBINARY(8000)) as VARCHAR(8000)) as result FROM CUST_ORDER_BINARY WHERE CUST_ORDER_ID = 'CO-299403S';
When I try the same thing in Ruby I get a result like this:
specs = VisualCustomer.connection.exec_query(sql).first
{"result"=>"T\u0000H\u0000I\u0000S\u0000 \u0000I\u0000S\u0000 \u0000A\u0000 \u0000L\u0000I\u0000G\u0000H\u0000T\u0000A\u0000R\u0000T\u0000 \u0000O\u0000R\u0000D\u0000E\u0000R\u0000.\u0000 \u0000 \u0000L\u0000I\u0000G\u0000H\u0000T\u0000A\u0000R\u0000T\u0000 \u0000I\u0000S\u0000 \u0000D\u0000O\u0000I\u0000N\u0000G\u0000 \u0000T\u0000H\u0000E\u0000 \u0000F\u0000A\u0000B\u0000.\u0000\r\u0000\n\u0000L\u0000I\u0000G\u0000H\u0000T\u0000A\u0000R\u0000T\u0000 \u0000S\u0000H\u0000I\u0000P\u0000P\u0000I\u0000N\u0000G\u0000 \u0000T\u0000O\u0000 \u0000C\u0000U\u0000S\u0000T\u0000O\u0000M\u0000E\u0000R\u0000 \u0000<\u00008\u0000-\u00002\u00003\u0000-\u00001\u00007\u0000>\u0000\r\u0000\n\u0000\r\u0000\n\u0000C\u0000o\u0000n\u0000n\u0000i\u0000e\u0000,\u0000 \u0000p\u0000l\u0000e\u0000a\u0000s\u0000e\u0000 \u0000p\u0000r\u0000i\u0000n\u0000t\u0000 \u0000s\u0000t\u0000i\u0000c\u0000k\u0000e\u0000r\u0000s\u0000 \u0000f\u0000o\u0000r\u0000 \u0000e\u0000a\u0000c\u0000h\u0000 \u0000o\u0000u\u0000t\u0000e\u0000r\u0000 \u0000b\u0000o\u0000x\u0000 \u0000t\u0000h\u0000a\u0000t\u0000 \u0000i\u0000n\u0000c\u0000l\u0000u\u0000d\u0000e\u0000s\u0000 \u0000t\u0000h\u0000e\u0000 \u0000i\u0000n\u0000f\u0000o\u0000:\u0000\r\u0000\n\u0000(\u00001\u0000)\u0000 \u0000o\u0000f\u0000 \u0000(\u00001\u0000)\u0000\r\u0000\n\u0000L\u0000A\u00002\u0000 \u0000T\u0000h\u0000r\u0000e\u0000e\u0000 \u0000L\u0000a\u0000r\u0000g\u0000e\u0000 \u0000\r\u0000\n\u00008\u00006\u0000\\\u0000\"\u0000L\u0000 \u0000x\u0000 \u00006\u0000\\\u0000\"\u0000W\u0000 \u0000x\u0000 \u00008\u0000\\\u0000\"\u0000H\u0000\r\u0000\n\u0000N\u0000a\u0000t\u0000u\u0000r\u0000a\u0000l\u0000\r\u0000\n\u0000P\u00002\u00000\u00009\u00008\u00001\u00000\u00005\u0000 \u0000M\u0000O\u0000D\u0000 \u0000P\u0000i\u0000z\u0000z\u0000a\u0000 \u0000S\u0000u\u0000g\u0000a\u0000r\u0000h\u0000o\u0000u\u0000s\u0000e\u0000,\u0000 \u0000U\u0000T\u0000"}
So the data is "almost" there. :)
I've tried gsubbing to remove the \u0000 from the result, but that's not working, obviously.
** EDIT 1 **
So, for some reason, getting the data from MSSQL into Ruby is causing some kind of partial translation. I never get the raw data from the field; instead I get the "semi-translated" data. Even if I just query it, it still comes out like
"T\x00H\x00I\x00S\x00 \x00I\x00S\x00 \x00A\x00...
I tried to convert it back to hex by doing:
s = order_specs.each_byte.map { |b| b.to_s(16) }.join
Then, when I do:
order_specs = s.scan(/.{2}(?=0{2})/).map{|s| s.to_i(16)}.pack("c*").tr("\x02", " ")
I just get an empty string. :/
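For reference, b.to_s(16) does not zero-pad, so a null byte becomes a single "0" and the hex no longer lines up with a two-character scan. A minimal sketch of a padded round trip that decodes the hex directly as UTF-16LE (assuming order_specs holds the raw bytes as fetched above):
s    = order_specs.each_byte.map { |b| format("%02x", b) }.join  # zero-padded hex
text = [s].pack("H*").force_encoding(Encoding::UTF_16LE).encode(Encoding::UTF_8)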
The \u0000 escapes only happen when you're inspecting the data; when you write it out, it will be fine.
Example:
$ ruby -e 'bin = File.read("/bin/ls");p bin; File.open("/tmp/file","w+"){|f| f.write bin}'
"\u007FELF\u0002\u0001\u0001\u0000\u0000\u0000 ...
....
$ md5sum /bin/ls
84b7b042405dfc79f2afe9b12d6b931d /bin/ls
$ md5sum /tmp/file
84b7b042405dfc79f2afe9b12d6b931d /tmp/file
So here we read the binary file /bin/ls and wrote it to another file, /tmp/file; as you can see, the checksums are identical.
s = "54004800490053002000490053002000410020004c00490047004800540041005200540020004f0052004400450052002e00200020004c004900470048005400410052005400200049005300200044004f0049004e004700200054004800450020004600410042002e000d000a004c00490047004800540041005200540020005300480049005000500049004e004700200054004f00200043005500530054004f004d004500520020003c0038002d00320033002d00310037003e000d000a000d000a0043006f006e006e00690065002c00200070006c00650061007300650020007000720069006e007400200073007400690063006b00650072007300200066006f0072002000650061006300680020006f007500740065007200200062006f00780020007400680061007400200069006e0063006c0075006400650073002000740068006500200069006e0066006f003a000d000a0028003100290020006f00660020002800310029000d000a004c004100320020005400680072006500650020004c00610072006700650020000d000a00380036005c0022004c0020007800200036005c002200570020007800200038005c00220048000d000a004e00610074007500720061006c000d000a005000320030003900380031003000350020004d004f0044002000500069007a007a00610020005300750067006100720068006f007500730065002c00200055005400"
Code:
puts s.scan(/.{2}(?=0{2})/).map{|s| s.to_i(16)}.pack("c*")
Output:
THISISALIGHTARTORDER.LIGHTARTISDOINGTHEFAB.
LIGHTARTSHIINGTOCUSTOMER<8-23-17>
Connie,leaserintstickersforeachouterboxthatincludestheinfo:
(1)of(1)
LA2ThreeLarge
86\"Lx6\"Wx8\"H
Natural
29815MODizzaSugarhouse,UT
Note: some characters are unprintable, so they do not appear on this page.
Or, if you replace "\x02" with a space,
puts s.scan(/.{2}(?=0{2})/).map{|s| s.to_i(16)}.pack("c*").tr("\x02", " ")
you get:
THIS IS A LIGHTART ORDER. LIGHTART IS DOING THE FAB.
LIGHTART SHIING TO CUSTOMER <8-23-17>
Connie, lease rint stickers for each outer box that includes the info:
(1) of (1)
LA2 Three Large
86\"L x 6\"W x 8\"H
Natural
29815 MOD izza Sugarhouse, UT
I finally figured this out. I needed to do string.gsub("\u0000", '')
So, I was getting the data from the MSSQL database correctly, it seemed, but that null byte was really throwing things off; it was being sent to the front end, where it was appearing on the page. I swear I tried gsubbing before, but for whatever reason it wasn't working. I tried it again now, where the response is being formed, and it is now being sent correctly.
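For completeness, an encoding-aware alternative would be to reinterpret the byte pairs instead of stripping the NULs; a sketch, assuming the value really is little-endian UTF-16 made up of ASCII byte pairs:
result = specs["result"]
# reinterpret the "T\u0000H\u0000..." byte pairs as UTF-16LE, then transcode to UTF-8
clean = result.force_encoding(Encoding::UTF_16LE).encode(Encoding::UTF_8)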
Related
I have code running on a microcontroller, and I am parsing the control command it will receive.
The command is JSON, and looks approximately like this...
{[PARAMETER_1]:[VALUE_1], [...] , [PARAMETER_N]:[VALUE_N]}
So it could be...
{"opMode"=1, "StringA" = "Hello"}
or it could be
{"brightness"=5, "StringB"="Goodbye"}
How can I test to see if a parameter is included?
For example, if I process with this code:
String command = <JSON COMMAND>
char conString[200];
command.toCharArray(conString, sizeof(conString));
StaticJsonBuffer<1000> jsonBuffer;
JsonObject& root = jsonBuffer.parseObject(conString);
How can I test the resultant root["PARAMETER"] lookups to see whether the keys actually existed? The problem is that if I just do something like...
resultString = root["StringA"];
resultInt = root["opMode"];
If those exist but are "" and 0, they will return the same values as if they were not included in the JSON in the first place.
Do I have to use something like command.indexOf("opMode") on the raw JSON string to make sure it's there? (In this example, that would work. But with more complicated JSON, that seems to be a lot of work to make sure a response is valid/existing)
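One way to distinguish "absent" from "present but 0 or empty" is a key-existence check before reading the value; a sketch, assuming ArduinoJson 5.x to match the StaticJsonBuffer/parseObject code above:
if (!root.success()) {
  // the command string was not valid JSON at all
}
if (root.containsKey("opMode")) {
  int opMode = root["opMode"];          // key was genuinely present
}
if (root.containsKey("StringA")) {
  const char* stringA = root["StringA"];
}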
I'm using this JSON content (I'm open to suggestions on better formatting here):
{"forwardingZones": [{"name": "corp.mycompany.com","dnsServers": ["192.168.0.1","192.168.0.2"]}]}
Note: we may add more items to this list as we scale out, both more IPs and more names, hence the join(',') at the end of the code below.
And I'm trying to loop through it to get this result:
corp.mycompany.com=192.168.0.1;192.168.0.2
Using this code:
forward_zones = node['DNS']['forward_zones'].each do |forwarded_zone|
  forwarded_zone_name = forwarded_zone['name']
  forwarded_zone_dns_servers = forwarded_zone['dns_servers'].join(';')
  "#{forwarded_zone_name}=#{forwarded_zone_dns_servers}"
end.join(',')
This is the result that I get:
{"dnsServers"=>["192.168.0.1", "192.168.0.2"], "name"=>"corp.mycompany.com"}
What am I doing wrong?
x.each returns x. You want x.map.
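A sketch of that change, keeping the attribute keys exactly as in the question's code:
forward_zones = node['DNS']['forward_zones'].map do |forwarded_zone|
  # map returns the block's value for each zone, so the joins work as intended
  "#{forwarded_zone['name']}=#{forwarded_zone['dns_servers'].join(';')}"
end.join(',')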
When I try to upload a file with an apostrophe in its name, I get the error:
Internal Server Error
The server encountered an internal error or misconfiguration and was unable to complete your request.
If the file name is test's.pdf, I get the error, but if I change the name to test.pdf, there is no error.
Does anyone know why?
Thanks
I had a similar situation where I was dynamically creating filenames for pages that created Excel files from query results. The approach I took was to create a function that replaced all the bad characters with something. Here is part of that function.
<cfargument name="replacementString" required="no" default=" ">
<cfscript>
var inValidFileNameCharacters = "[/\\*'?[\]:><""|]";
return reReplace (arguments.fileNameIn, inValidFileNameCharacters, arguments.replacementString, "all");
</cfscript>
You might want to consider an opposite approach. Instead of declaring invalid characters and replacing them, declare valid ones and replace anything that is not in the list of valid characters.
I suggest making this a function that's available on all appropriate pages. How you do that depends on your situation.
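For example, a whitelist version might look like this (a sketch; the function name and the allowed character set are assumptions):
<cffunction name="sanitizeFileName" returntype="string" output="no">
    <cfargument name="fileNameIn" type="string" required="yes">
    <cfargument name="replacementString" required="no" default="_">
    <!--- keep letters, digits, space, dot, underscore and hyphen; replace everything else --->
    <cfreturn reReplace(arguments.fileNameIn, "[^A-Za-z0-9 ._-]", arguments.replacementString, "all")>
</cffunction>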
My guess is that the apostrophe is one of those multi-byte "smart" apostrophes that Microsoft Word often substitutes. A character like that may not be a valid character for your OS file system.
You may want to re-code the system to use a temporary file on upload and then rename it to a valid file name after the upload is successful.
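A rough sketch of that flow (the form field name, the temp-directory destination, and a sanitizing helper like the sanitizeFileName() sketched above are all assumptions):
<cffile action="upload"
        filefield="uploadFile"
        destination="#getTempDirectory()#"
        nameconflict="makeunique"
        result="uploadResult">
<cfset safeName = sanitizeFileName(uploadResult.serverFile)>
<cffile action="rename"
        source="#uploadResult.serverDirectory#/#uploadResult.serverFile#"
        destination="#uploadResult.serverDirectory#/#safeName#">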
Here's some basic troubleshooting info.
Wrap your code in a try/catch block and dump the full error to the page output. Examples of using try/catch/dump are below; they force an error by dividing by zero.
For tag based cfml:
<cftry>
    <cfset offendingCode = 1 / 0>
    <cfcatch type="any">
        <cfdump var="#cfcatch#" label="cfcatch">
    </cfcatch>
</cftry>
For cfscript cfml:
<cfscript>
    try {
        offendingCode = 1 / 0;
    } catch (any e) {
        writeDump(var=e, label="Exception");
    }
</cfscript>
Problem:
Hello, I have been struggling recently in my programming endeavours. I have managed to receive the output below from Google Speech to Text, but I cannot figure out how to draw data from this block.
Excerpt 1:
[VoiceMain]: Successfully initialized
{"result":[]}
{"result":[{"alternative":[{"transcript":"hello","confidence":0.46152416},{"transcript":"how low"},{"transcript":"how lo"},{"transcript":"how long"},{"transcript":"Polo"}],"final":true}],"result_index":0}
[VoiceMain]: Successfully initialized
{"result":[]}
{"result":[{"alternative":[{"transcript":"hello"},{"transcript":"how long"},{"transcript":"how low"},{"transcript":"howlong"}],"final":true}],"result_index":0}
Objective:
My goal is to extract the string "hello" (without the quotation marks) from the first transcript of each block and set it equal to a variable. The problem arises when I do not know what the phrase will be. Instead of "hello", the phrase may be a string of any length. Even if it is a different string, I would still like to set it to the same variable to which the phrase "hello" would have been set.
Furthermore, I would like to extract the number after the word "confidence". In this case, it is 0.46152416. Data type does not matter for the confidence variable. The confidence variable appears to be more difficult to extract from the blocks because it may or may not be present. If it is not present, it must be ignored. If it is present however, it must be detected and stored as a variable.
Also please note that this text block is stored within a file named "CurlOutput.txt".
All help or advice related to solving this problem is greatly appreciated.
You could do this with regex, but then I am assuming you will want to use this as a dict later in your code. So here is a Python approach to building this result as a dictionary.
import json

with open('CurlOutput.txt') as f:
    lines = f.read().splitlines()

flag = '{"result":[]} '

for line in lines:  # loop through each line in the file
    if flag in line:  # check if this is a line with data on it
        results = json.loads(line.replace(flag, ''))['result']  # load the data as a dict
        # If you just want to change the first index of alternative:
        # results[0]['alternative'][0]['transcript'] = 'myNewString'
        # If you want to check every alternative for confidence and transcript:
        for result in results[0]['alternative']:  # loop over each alternative
            transcript = result['transcript']
            confidence = None
            if 'confidence' in result:
                confidence = result['confidence']
            # now do whatever you want with confidence and transcript
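And if you only need the first transcript of each block (plus its confidence when present), a minimal variant of the lookup inside that loop might be:
first = results[0]['alternative'][0]
phrase = first['transcript']            # e.g. "hello"
confidence = first.get('confidence')    # None when the key is absent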
I have a data frame DF which contains numerous variables. Each variable is present twice because I am conducting an analysis of "couples".
Among others, DF has a series of indicators of diversity :
DF$div1.1, DF$div2.1, ..., DF$divN.1, DF$div1.2, ..., DF$divN.2
Similarly, it has a series of indicators of another characteristic:
DF$char1.1, DF$char2.1, ..., DF$charM.1, DF$char1.2, ..., DF$charM.2
Here's a link to an example of DF: http://shorttext.com/5d90dd64
In each case, the ".1" and ".2" suffixes indicate which member of the couple is considered.
My goal:
For each indicator divI and charJ, I want to create another variable DF$divchar that takes the value DF$divI.1 when DF$charJ.1>DF$charJ.2; and DF$divI.2 when DF$charJ.1<DF$charJ.2.
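For concreteness, the rule for a single (divI, charJ) pair can be written with ifelse; a sketch, assuming columns literally named div1.1, div1.2, char1.1 and char1.2:
DF$div1char1 <- ifelse(DF$char1.1 > DF$char1.2, DF$div1.1, DF$div1.2)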
Here is the solution I came up with; it seems very intricate and sometimes behaves in strange ways:
I created a series of binary variables that take the value one if DF$charJ.1>DF$charJ.2. They are stored under DF$CharMax.1.
Here's how I created it:
DF$CharMax.1 <- as.data.frame(
  sapply(1:length(nam),
         function(n)
           as.numeric(DF[names(DF) == names.1[n]]
                      > DF[names(DF) == names.2[n]])
  ))
I created the function BinaryExtract:
BinaryExtract <- function(var1, var2, extract) {var1*extract +var2*(1-extract)}
I created the matrix NameFull that contains all the possible combinations of div and char, separated with "YY":
NameFull <- sapply(c("div1", ..., "divN"),
                   function(nam) paste(nam, names(DF$YMax.1), sep="YY"))
And then I create all my variables:
DF[, as.vector(NameFull)] <- lapply(as.vector(NameFull), function(e)
  BinaryExtract(DF[, paste0(unlist(strsplit(e, "YY"))[1], ".1")],
                DF[, paste0(unlist(strsplit(e, "YY"))[1], ".1")],
                DF$charMax.1[unlist(strsplit(e, "YY"))[2]]))
My Problem
A. It looks like a very complicated solution for something so simple. What am I missing?
B. Moreover, when I print DF (just typing DF in the command window), I do not see the NameFull variables; they seem to appear under the char names instead.
Here's what I get: http://shorttext.com/5d9102c
Similarly, I have tried to change all their names to get rid of the "YY" and it does not seem to work:
names(DF[, as.vector(NameFull)]) <- as.vector(sapply(c("div1", ..., "divN"), function(nam)
  paste(nam, names(DF$YMax.1), sep=".")))
When I look at names(DF), I keep getting the old names with the "YY".
However, I do get a result if I explicitly call for them:
> DF[,"divIYYcharJ"]
I would really appreciate any suggestions, comments, and explanations. I am quite new to R and was more used to Stata. I feel there is something deeply inefficient here. Thanks.