Opa database: how to know if a value exists in the database?

I'd like to know if a record is present in a database, using a field different from the key field.
I've tried the following code:
function start()
{
    jlog("start db query")
    myType d1 = {A:"rabbit", B:"poney"};
    /myDataBase/data[A == d1.A] = d1
    jlog("db write done")
    option opt = ?/myDataBase/data[B == "rabit"]
    jlog("db query done")
    match(opt)
    {
        case {none} : <>Nothing in db</>
        case {some:data} : <>{data} in database</>
    }
}
Server.start(
    {port:8092, netmask:0.0.0.0, encryption: {no_encryption}, name:"test"},
    [
        {page: start, title: "test" }
    ]
)
But the server hangs and never gets to the line jlog("db query done"). I misspelled "rabit" on purpose. What should I have done?
Thanks

Indeed it fails with your example; I get a "Match failure 8859742" exception, don't you see it?
But do not use ?/myDataBase/data[B == "rabit"] (which should have been rejected at compile time; bug report sent). Use /myDataBase/data[B == "rabit"] instead, which is a DbSet. The reason is that when you don't query by the primary key, more than one value can be returned, i.e. a set of values.
You can convert a DbSet to an iter with DbSet.iterator, then manipulate it with the Iter module: http://doc.opalang.org/module/stdlib.core.iter/Iter
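For example, a minimal (untested) sketch against the question's database, relying only on DbSet.iterator and the iter record from the linked Iter module:
function check_rabbit() {
    // /myDataBase/data[B == "rabit"] is a DbSet, possibly holding 0..n values
    it = DbSet.iterator(/myDataBase/data[B == "rabit"]);
    match (it.next()) {
        case {none}: <>Nothing in db</>;
        case {some: _}: <>Found a matching record in database</>;
    }
}
If you need the values themselves, fold or map over the iter with the Iter functions instead of only pattern matching on the first element.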

Related

invalid sign in external "numeric" value error from Npgsql Binary copy

I get this error on using Npgsql Binary copy (see my implementation at the bottom):
invalid sign in external "numeric" value
The RetailCost column on which it fails has the following PostgreSQL properties:
type: numeric(19,2)
nullable: not null
storage: main
default: 0.00
The PostgreSQL log looks like this:
2019-07-15 13:24:05.131 ACST [17856] ERROR: invalid sign in external "numeric" value
2019-07-15 13:24:05.131 ACST [17856] CONTEXT: COPY products_temp, line 1, column RetailCost
2019-07-15 13:24:05.131 ACST [17856] STATEMENT: copy products_temp (...) from stdin (format binary)
I don't think it should matter, but there are only zero or positive RetailCost values (no negative or null ones)
My implementation looks like this:
using (var importer = conn.BeginBinaryImport($"copy {tempTableName} ({dataColumns}) from stdin (format binary)"))
{
    foreach (var product in products)
    {
        importer.StartRow();
        importer.Write(product.ManufacturerNumber, NpgsqlDbType.Text);
        if (product.LastCostDateTime == null)
            importer.WriteNull();
        else
            importer.Write((DateTime)product.LastCostDateTime, NpgsqlDbType.Timestamp);
        importer.Write(product.LastCost, NpgsqlDbType.Numeric);
        importer.Write(product.AverageCost, NpgsqlDbType.Numeric);
        importer.Write(product.RetailCost, NpgsqlDbType.Numeric);
        if (product.TaxPercent == null)
            importer.WriteNull();
        else
            importer.Write((decimal)product.TaxPercent, NpgsqlDbType.Numeric);
        importer.Write(product.Active, NpgsqlDbType.Boolean);
        importer.Write(product.NumberInStock, NpgsqlDbType.Bigint);
    }
    importer.Complete();
}
Any suggestions would be welcome
I caused this problem when using MapBigInt, not realizing the column was accidentally numeric(18). Changing the column to bigint fixed my problem.
The cause of the problem in my case was that the importer.Write statements were not in the same order as the dataColumns.
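To illustrate that (a sketch only, with a made-up column list; conn, tempTableName and products are as in the question): each StartRow() must be followed by exactly one Write()/WriteNull() per column, in the order the columns appear in the COPY statement.
using (var importer = conn.BeginBinaryImport(
    $"copy {tempTableName} (manufacturer_number, retail_cost, number_in_stock) from stdin (format binary)"))
{
    foreach (var product in products)
    {
        importer.StartRow();
        // One value per column, in exactly the order of the column list above.
        importer.Write(product.ManufacturerNumber, NpgsqlDbType.Text);   // manufacturer_number
        importer.Write(product.RetailCost, NpgsqlDbType.Numeric);        // retail_cost
        importer.Write(product.NumberInStock, NpgsqlDbType.Bigint);      // number_in_stock
    }
    importer.Complete();
}
If a value lands in a slot the server expects to hold a different type (for example bigint data written into a numeric(18) column, or columns written out of order), the binary COPY can decode garbage, which is where errors like invalid sign in external "numeric" value come from.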

I am getting this error even after declaring the scalar variable: 'Must declare the scalar variable "@col_shipping_price"'

I am trying to fetch data from the source and save it into the database, but I am facing this issue even after declaring the scalar variable. Find below the approach I tried. The issue I am facing: System.Data.SqlClient.SqlException: 'Must declare the scalar variable "@col_shipping_price".'
class Program
{
    static void Main(string[] args)
    {
        XmlSerializer deserializer = new XmlSerializer(typeof(AmazonEnvelope));
        TextReader reader = new StreamReader(@"C:\Users\*********\Desktop\16315550943018039.xml");
        object obj = deserializer.Deserialize(reader);
        AmazonEnvelope XmlData = (AmazonEnvelope)obj;
        reader.Close();

        SqlConnection cnn = new SqlConnection(@"Data Source=ABDUL-TPS\TPSSQLSERVER;Initial Catalog=Zoho_Amz_API;User ID=zohoapiservice;Password=**********");
        cnn.Open();

        for (int i = 0; i < XmlData.Message.Count; i++)
        {
            string sqlquery = "if not exists (select * from tbl_AMZ_API_sample where col_sku like '" + XmlData.Message[i].Order.OrderItem.SKU + "') insert into tbl_AMZ_API_sample(col_amazon_order_id, col_merchant_order_id, col_purchase_date, col_last_updated_date, col_order_status, col_fulfillment_channel, col_sales_channel, col_order_channel, col_url, col_ship_service_level, col_product_name, col_sku, col_asin, col_number_of_items, col_item_status, col_quantity, col_currency, col_item_price, col_item_tax, col_shipping_price, col_shipping_tax, col_gift_wrap_price, col_gift_wrap_tax, col_item_promotion_discount, col_ship_promotion_discount, col_ship_city, col_ship_state, col_ship_postal_code, col_ship_country, col_promotion_ids, col_is_business_order, col_purchase_order_number, col_price_designation, col_fulfilled_by, col_last_update_time) values(@col_amazon_order_id, @col_merchant_order_id, @col_purchase_date, @col_last_updated_date, @col_order_status, @col_fulfillment_channel, @col_sales_channel, @col_order_channel, @col_url, @col_ship_service_level, @col_product_name, @col_sku, @col_asin, @col_number_of_items, @col_item_status, @col_quantity, @col_currency, @col_item_price, @col_item_tax, @col_shipping_price, @col_shipping_tax, @col_gift_wrap_price, @col_gift_wrap_tax, @col_item_promotion_discount, @col_ship_promotion_discount, @col_ship_city, @col_ship_state, @col_ship_postal_code, @col_ship_country, @col_promotion_ids, @col_is_business_order, @col_purchase_order_number, @col_price_designation, @col_fulfilled_by, @col_last_update_time)";
            SqlCommand cmd = new SqlCommand(sqlquery, cnn);

            for (int j = 0; j < XmlData.Message[i].Order.OrderItem.ItemPrice.Component.Count; j++)
            {
                cmd.Parameters.AddWithValue("@col_amazon_order_id", XmlData.Message[i].Order.AmazonOrderID);
                if (XmlData.Message[i].Order.MerchantOrderID == null)
                {
                    cmd.Parameters.AddWithValue("@col_merchant_order_id", DBNull.Value);
                }
                else
                {
                    cmd.Parameters.AddWithValue("@col_merchant_order_id", XmlData.Message[i].Order.MerchantOrderID);
                }
                cmd.Parameters.AddWithValue("@col_purchase_date", XmlData.Message[i].Order.PurchaseDate);
                cmd.Parameters.AddWithValue("@col_last_updated_date", Global.unique.ToString());
                cmd.Parameters.AddWithValue("@col_order_status", XmlData.Message[i].Order.OrderStatus);
                cmd.Parameters.AddWithValue("@col_fulfillment_channel", XmlData.Message[i].Order.FulfillmentData.FulfillmentChannel);
                cmd.Parameters.AddWithValue("@col_sales_channel", XmlData.Message[i].Order.SalesChannel);
                cmd.Parameters.AddWithValue("@col_ship_service_level", XmlData.Message[i].Order.FulfillmentData.ShipServiceLevel);
                cmd.Parameters.AddWithValue("@col_product_name", XmlData.Message[i].Order.OrderItem.ProductName);
                cmd.Parameters.AddWithValue("@col_sku", XmlData.Message[i].Order.OrderItem.SKU);
                cmd.Parameters.AddWithValue("@col_asin", XmlData.Message[i].Order.OrderItem.ASIN);
                cmd.Parameters.AddWithValue("@col_number_of_items", XmlData.Message[i].Order.OrderItem.NumberOfItems);
                cmd.Parameters.AddWithValue("@col_item_status", XmlData.Message[i].Order.OrderItem.ItemStatus);
                cmd.Parameters.AddWithValue("@col_quantity", XmlData.Message[i].Order.OrderItem.Quantity);
                cmd.Parameters.AddWithValue("@col_currency", XmlData.Message[i].Order.OrderItem.ItemPrice.Component[j].Amount.Currency);
                switch (XmlData.Message[i].Order.OrderItem.ItemPrice.Component[j].Type)
                {
                    case "Principal":
                        cmd.Parameters.AddWithValue("@col_item_price", XmlData.Message[i].Order.OrderItem.ItemPrice.Component[j].Amount.Text);
                        cmd.Parameters.AddWithValue("@col_item_tax", XmlData.Message[i].Order.OrderItem.ItemPrice.Component[j].Amount.Text);
                        break;
                    case "Shipping":
                        cmd.Parameters.AddWithValue("@col_shipping_price", XmlData.Message[i].Order.OrderItem.ItemPrice.Component[j].Amount.Text);
                        break;
                    case "GiftWrap":
                        cmd.Parameters.AddWithValue("@col_gift_wrap_price", XmlData.Message[i].Order.OrderItem.ItemPrice.Component[j].Amount.Text);
                        break;
                    case "Shipping-Tax":
                        cmd.Parameters.AddWithValue("@col_shipping_tax", XmlData.Message[i].Order.OrderItem.ItemPrice.Component[j].Amount.Text);
                        break;
                    default:
                        cmd.Parameters.AddWithValue("@col_gift_wrap_tax", XmlData.Message[i].Order.OrderItem.ItemPrice.Component[j].Amount.Text);
                        break;
                }
                if (XmlData.Message[i].Order.OrderItem.Promotion == null)
                {
                    cmd.Parameters.AddWithValue("@col_item_promotion_discount", 0);
                    cmd.Parameters.AddWithValue("@col_ship_promotion_discount", 0);
                    cmd.Parameters.AddWithValue("@col_promotion_ids", 0);
                }
                else
                {
                    cmd.Parameters.AddWithValue("@col_item_promotion_discount", XmlData.Message[i].Order.OrderItem.Promotion.ItemPromotionDiscount);
                    cmd.Parameters.AddWithValue("@col_ship_promotion_discount", XmlData.Message[i].Order.OrderItem.Promotion.ShipPromotionDiscount);
                    cmd.Parameters.AddWithValue("@col_promotion_ids", XmlData.Message[i].Order.OrderItem.Promotion.PromotionIDs);
                }
                cmd.Parameters.AddWithValue("@col_ship_city", XmlData.Message[i].Order.FulfillmentData.Address.City);
                cmd.Parameters.AddWithValue("@col_ship_state", XmlData.Message[i].Order.FulfillmentData.Address.State);
                cmd.Parameters.AddWithValue("@col_ship_postal_code", XmlData.Message[i].Order.FulfillmentData.Address.PostalCode);
                cmd.Parameters.AddWithValue("@col_ship_country", XmlData.Message[i].Order.FulfillmentData.Address.Country);
                cmd.Parameters.AddWithValue("@col_is_business_order", XmlData.Message[i].Order.IsBusinessOrder);
                cmd.Parameters.AddWithValue("@col_purchase_order_number", XmlData.Message[i].Order.PurchaseOrderNumber);
                cmd.Parameters.AddWithValue("@col_price_designation", XmlData.Message[i].Order.OrderItem.PriceDesignation);
                cmd.Parameters.AddWithValue("@col_fulfilled_by", XmlData.Message[i].Order.FulfilledBy);
                cmd.Parameters.AddWithValue("@col_Order_Channel", DBNull.Value);
                cmd.Parameters.AddWithValue("@col_url", DBNull.Value);
                Console.WriteLine(XmlData.Message[i].Order.OrderItem.ItemPrice.Component[j].Type);
                cmd.ExecuteNonQuery();
            }
        }
        Console.ReadKey();
    }
}
Someone help please.
You obviously know in general how to use parameterized queries, so why not use them in the SELECT part of the query here:
... like '" + XmlData.Message[i].Order.OrderItem.SKU + "' ...
You should also use parameters there. And the LIKE could be changed to an = unless XmlData.Message[i].Order.OrderItem.SKU includes wildcards, which seems unlikely.
On your problem:
You have @col_shipping_price in the INSERT part of your query, apparently meant as a parameter. Yet you only set this parameter when XmlData.Message[i].Order.OrderItem.ItemPrice.Component[j].Type is Shipping in the switch(). In all other cases @col_shipping_price stays as it is in the query and isn't replaced, so SQL Server thinks it is a variable and rightfully complains that it wasn't declared. The same problem might occur with some of your other parameters.
You have some options here.
Set all the parameters every time, using DBNull.Value where you have no actual value. Unless there are NOT NULL constraints on the columns, that should work. Done this way, there is also no need to rebuild the query in every iteration: create the command once, optionally Prepare() it, and then just change the parameter values inside the loop. Performance may benefit from that (see the sketch after this answer).
Build different queries depending on XmlData.Message[i].Order.OrderItem.ItemPrice.Component[j].Type. That way you can simply leave out the placeholders for unused parameters.
Also be careful with AddWithValue(), or avoid it completely. It has to guess the data type of the column in the database, sometimes fails miserably, and produces odd errors. Better use the Add() overloads with explicit type arguments. Something to read on this topic: "AddWithValue is Evil".
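A rough sketch of the first option (the column types are guessed as decimal, only a few parameters are shown, and order stands for XmlData.Message[i].Order from the question's outer loop; sqlquery and cnn are reused from the question):
// Create the command once, with every placeholder from the INSERT listed
// exactly once, using Add() with an explicit type instead of AddWithValue().
var cmd = new SqlCommand(sqlquery, cnn);
cmd.Parameters.Add("@col_item_price", SqlDbType.Decimal);
cmd.Parameters.Add("@col_item_tax", SqlDbType.Decimal);
cmd.Parameters.Add("@col_shipping_price", SqlDbType.Decimal);
cmd.Parameters.Add("@col_gift_wrap_price", SqlDbType.Decimal);
cmd.Parameters.Add("@col_shipping_tax", SqlDbType.Decimal);
cmd.Parameters.Add("@col_gift_wrap_tax", SqlDbType.Decimal);
// ... one Add() per remaining placeholder ...

foreach (var component in order.OrderItem.ItemPrice.Component)
{
    // Reset every parameter on each iteration so none is left undefined.
    foreach (SqlParameter p in cmd.Parameters)
        p.Value = DBNull.Value;

    switch (component.Type)
    {
        case "Principal":
            cmd.Parameters["@col_item_price"].Value = decimal.Parse(component.Amount.Text);
            cmd.Parameters["@col_item_tax"].Value = decimal.Parse(component.Amount.Text);
            break;
        case "Shipping":
            cmd.Parameters["@col_shipping_price"].Value = decimal.Parse(component.Amount.Text);
            break;
        // ... remaining cases, plus the parameters that are set unconditionally ...
    }
    cmd.ExecuteNonQuery();
}
With this shape SQL Server always receives a value (possibly NULL) for every placeholder, so the "Must declare the scalar variable" error cannot occur.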

Golang (w/gocql driver) not returning all entries in Cassandra DB

I have what appears to be a strange bug in either the gocql driver for Cassandra, or in the Cassandra database itself.
I am trying to do a simple write and then a read-all request in two separate functions. I would expect to get all entries on the read-all request, but I am only getting the last entry in Cassandra.
Here is how I am doing the write:
util.CassSession, _ = util.CassCluster.CreateSession()
defer util.CassSession.Close()

keySpaceMeta, _ := util.CassSession.KeyspaceMetadata("platypus")
valC, exists := keySpaceMeta.Tables["cassmessage"]
if exists == true {
    fmt.Println("cassmessage exists!!!")
} else {
    fmt.Println("cassmessage doesnt exist!")
}
if valC != nil {
    fmt.Println("return from valC cassmessage: ", valC)
}

insertString := `INSERT INTO cassmessage
    (messagefrom, messageto, messagecontent)
    VALUES('` + sendMsgReq.MessageFrom + `', '` +
    sendMsgReq.MessageTo + `', '` + sendMsgReq.MessageContent + `')`
fmt.Println("insertString value: ", insertString)

err := util.CassSession.Query(insertString).Exec()
if err != nil {
    fmt.Println("there was an error in appending data to cassmessage: ", err)
} else {
    fmt.Println("inserted data into cassmessage successfully")
}
the terminal output from the above:
app_1 | [17:59:43][WEBSERVER] : cassmessage exists!!!
app_1 | [17:59:43][WEBSERVER] : return from valC cassmessage:
&{platypus cassmessage [] []
[0xc000400140] [] map[messagefrom:0xc0004000a0
messageto:0xc000400140 messagecontent:0xc000400000]
[messagecontent messagefrom messageto]}
app_1 | [17:59:43][WEBSERVER] : inserted data into cassmessage successfully
I am not entirely sure what the output of valC means, although it appears to be some sort of memory address, which is a good sign. I also see that I am not getting any error from the write Exec() call, which is hopeful.
Here is how I am doing the read:
util.CassSession, _ = util.CassCluster.CreateSession()
defer util.CassSession.Close()

keySpaceMeta, _ := util.CassSession.KeyspaceMetadata("platypus")
valC, exists := keySpaceMeta.Tables["cassmessage"]

queryString := `SELECT messageto, messagecontent, messagefrom FROM cassmessage WHERE messagefrom='` + mailReq.Email + `'`
// returns nothing, should return many rows
queryString2 := `SELECT messageto, messagecontent, messagefrom FROM cassmessage`
// returns only last entry, should return many rows
queryString3 := `SELECT * FROM cassmessage WHERE messagefrom='` + mailReq.Email + `'`
// returns nothing, should return many rows
queryAllString := `SELECT * FROM cassmessage`
// returns only last entry, should return many rows

var messageto string
var messagecontent string
var messagefrom string

iter := util.CassSession.Query(queryAllString).Iter()
for iter.Scan(&messageto, &messagecontent, &messagefrom) {
    fmt.Println("Iter messageto: %v", messageto)
    fmt.Println("Iter messagecontent: %v", messagecontent)
    fmt.Println("Iter messagefrom: %v", messagefrom)
}
the terminal output from above:
app_1 | [18:09:54][WEBSERVER] : Iter messageto: %v xyz#xyz.com
app_1 | [18:09:54][WEBSERVER] : Iter messagecontent: %v a
app_1 | [18:09:54][WEBSERVER] : Iter messagefrom: %v abc#abc.com
This is not what I expect, as this is the output of the read after multiple writes to the database. If you look at the comments on the various queryString values I have tried, two of them return nothing when I expect all entries to be returned, and two of them return only the last written entry (as far as I can tell they are all equivalent queries).
Does anyone know why I cannot return multiple entries using Iter or why my four different values on the different query strings I have tried are returning different results?
Thank you.
I maybe shouldn't, but I'm going to keep this here in case someone else runs into the same problem. I wasn't making sure that the primary key in my table was unique, and since Cassandra INSERTs are upserts, every write with the same key silently replaced the previous row. Doing something like this:
util.CassSession.Query("CREATE TABLE cassmessage(" +
    "messageto text, messagefrom text, messagecontent text, uniqueID text, PRIMARY KEY (uniqueID))").Exec()
managed to fix the issue.
Thanks to everyone who took a look and helped. Cheers!
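For anyone who lands here later, a hedged sketch of what the insert might then look like (gocql.TimeUUID() is just one way to get a unique key, and bound parameters replace the string concatenation from the question):
// Sketch only: every row gets its own uniqueID, so inserts stop overwriting
// each other (Cassandra INSERTs are upserts on the primary key).
err := util.CassSession.Query(
    `INSERT INTO cassmessage (uniqueID, messagefrom, messageto, messagecontent)
     VALUES (?, ?, ?, ?)`,
    gocql.TimeUUID().String(),
    sendMsgReq.MessageFrom,
    sendMsgReq.MessageTo,
    sendMsgReq.MessageContent,
).Exec()
if err != nil {
    fmt.Println("insert into cassmessage failed: ", err)
}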

Retry loop until condition met

I am trying to move my mouse onto an object, but I want a condition that checks whether "surowiec" is still on the screen; if not, I want to skip this loop and go to the next one, and after that one finishes, come back to the first and repeat.
[error] script [ Documents ] stopped with error in line 12 [error] FindFailed ( can not find surowiec.png in R[0,0 1920x1080]#S(0) )
w_lewo = Location(345,400)
w_prawo = Location(1570,400)
w_gore = Location(345,400)
w_dol = Location(345,400)
surowiec = "surowiec.png"
while surowiec:
    if surowiec == surowiec:
        exists("surowiec.png")
        if exists != None:
            click("surowiec.png")
            wait(3)
            exists("surowiec.png")
        elif exists == None:
            surowiec = None
            click(w_prawo)
            wait(8)
            surowiec = surowiec
How about a small example:
while True:
    if exists(surowiec):
        print('A')
        click(surowiec)
    else:
        print('B')
        break
A while True loop will always run until it meets a break that exits the loop. Also have a look at the functions that are available in Sikuli; it can sometimes be hard to discover what is available. Here are a few nice ones:
Link: Link 1 and Pushing keys and Regions
The commands I found most useful are exists (and if not exists(...)) and find, which locates an image on the screen and returns a match, so you don't have to search for the same image over and over again if it stays in the same location: image1 = find(surowiec)
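A small, untested sketch of that idea, reusing the image and location names from the question:
# Search once, then reuse the stored match while the image stays in place.
surowiec = "surowiec.png"

if exists(surowiec):        # returns a Match or None, it does not raise FindFailed
    m = find(surowiec)      # remember where the image is
    while exists(surowiec):
        click(m)            # click the remembered match instead of searching again
        wait(3)
else:
    click(w_prawo)          # image is gone, move on to the next step
    wait(8)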

postgres SQLSTATE : PQresultErrorField returns NULL

I am not able to get error details using the PQresultErrorField API after a query execution fails. Using PQerrorMessage on the connection gives the correct error (constraint violation xxx_pk etc etc) and PQresultStatus shows FATAL_ERROR.
However, when I use the API PQresultErrorField(result, PG_DIAG_SQLSTATE)), I get a NULL result. Other field-codes also gives me null results.
Does this API need to be compiled in?
Postgres version is 9.2.1
Using libpq C library
It's supposed to return NULL only when it's not applicable.
That simple test works for me:
PGresult* res = PQexec(conn, "SELECT * FROM foobar");
if (res) {
    if (PQresultStatus(res) == PGRES_FATAL_ERROR) {
        char* p = PQresultErrorField(res, PG_DIAG_SQLSTATE);
        if (p) {
            printf("sqlstate=%s\n", p ? p : "null");
        }
    }
}
Result:
sqlstate=42P01
