Lua - SQLite3 isn't inserting rows into its database

I'm trying to build an expense app for Android phones, and I need a way to display the expenses. The plan (for my current step) is to allow the user to view their expenses. I want to show a calendar-like screen, and if there is at least one expense for a day, then use a different color for the button.
My problem is in inserting information to the sqlite3 table. Here is my code:
require "sqlite3"
-- create path
local path = system.pathForFile( "expenses.sqlite", system.DocumentsDirectory )
file = io.open( path, "r" )
if file == nil then
    -- doesn't already exist, so copy it in from the resource directory
    pathSource = system.pathForFile( "expenses.sqlite", system.ResourceDirectory )
    fileSource = io.open( pathSource, "r" )
    contentsSource = fileSource:read( "*a" )
    -- write the destination file in the documents directory
    pathDest = system.pathForFile( "expenses.sqlite", system.DocumentsDirectory )
    fileDest = io.open( pathDest, "w" )
    fileDest:write( contentsSource )
    -- done
    io.close( fileSource )
    io.close( fileDest )
end
db = sqlite3.open( path )
-- set up the table if it doesn't exist
local tableSetup = [[CREATE TABLE IF NOT EXISTS expenses (id INTEGER PRIMARY KEY, amount, description, year, month, day);]]
db:exec( tableSetup )
local tableFill = [[INSERT INTO expenses VALUES (NULL,']] .. 15 .. [[',']] .. "Groceries" .. [[',']] .. 2013 .. [[',']] .. 4 .. [[',']] .. 8 .. [[');]]
db:exec( tableFill )
for row in db:nrows("SELECT * FROM expenses") do
    print("hi")
    if row.year == dateTable[i].year and row.month == dateTable[i].month and row.day == dateTable[i].day then
        flag = dateTable[i].day
    end
end
I have looked everywhere to see whether I've used the sqlite3 commands incorrectly, since I'm not very familiar with the library, but everything I tried failed. The print("hi") line doesn't execute, which tells me that there are no rows in the table.
Also, if I say db:nrows("SELECT year, month, day FROM expenses"), sqlite3 gives me an error saying there is no year column. My overall guess is that I'm not inserting the information into the table properly, but I've tried everything I can think of. Can anyone help?

I figured out that there was a version mismatch between the sqlite3 on the device and the one on my computer. Anyway, I changed a couple of lines and now it works flawlessly. I changed the SELECT statement and the for loop.
-- sqlite statement
local check = "SELECT DISTINCT year, month, day FROM expenses WHERE year = '" .. dateTable[i].year .. "' AND month = '" .. dateTable[i].month .. "' AND day = '" .. dateTable[i].day .. "'"
-- check if there is at least one expense for a given day;
-- if there isn't one, the for loop won't execute
for row in db:nrows(check) do
    flag = row.day
end
Then I go on to create a button with a different color if the day number is equal to the flag variable.
This is all inside another for loop, which creates each dateTable[i].
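A side note on the INSERT: concatenating values into the SQL string is fragile (a stray quote in the description would break the statement). A minimal sketch using a prepared statement instead, assuming the lsqlite3 binding that Corona exposes (db:prepare, bind_values, step, finalize):

-- insert one expense with bound parameters instead of string concatenation
local stmt = db:prepare( [[INSERT INTO expenses VALUES (NULL, ?, ?, ?, ?, ?);]] )
stmt:bind_values( 15, "Groceries", 2013, 4, 8 )
stmt:step()      -- runs the insert
stmt:finalize()  -- releases the statement

This also keeps the numeric values numeric instead of quoting them as strings.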

Related

The merge into command on Snowflake loads only null values when loading data from S3

I'm having issues when running the MERGE INTO command on Snowflake. The data is located in a bucket on S3, and the files are in .snappy.parquet format.
The command runs well and identifies the files in S3, but it loads only NULL values into the table. The total row count is correct, though.
I confirmed that @myExternalStageToS3 points to the right location by running a query which returned the expected values:
SELECT
  $1:DAY,
  $1:CHANNEL_CATEGORY,
  $1:SOURCE,
  $1:PLATFORM,
  $1:LOB
FROM @myExternalStageToS3
(FILE_FORMAT => 'sf_parquet_format')
As it is a new table with no records, only the NOT MATCHED condition (the INSERT branch) applies.
MERGE INTO myTable as target USING
(
SELECT
$1:DAY,
$1:CHANNEL_CATEGORY,
$1:SOURCE,
$1:PLATFORM,
$1:LOB
FROM @myExternalStageToS3
(FILE_FORMAT => 'sf_parquet_format')
) as src
ON target.CHANNEL_CATEGORY = src.$1:CHANNEL_CATEGORY
AND target.SOURCE = src.$1:SOURCE
WHEN MATCHED THEN
UPDATE SET
DAY= src.$1:DAY
,CHANNEL_CATEGORY= src.$1:CHANNEL_CATEGORY
,SOURCE= src.$1:SOURCE
,PLATFORM= src.$1:PLATFORM
,LOB= src.$1:LOB
WHEN NOT MATCHED THEN
INSERT
(
DAY,
CHANNEL_CATEGORY,
SOURCE,
PLATFORM,
LOB
) VALUES
(
src.$1:DAY,
src.$1:CHANNEL_CATEGORY,
src.$1:SOURCE,
src.$1:PLATFORM,
src.$1:LOB
);
The sf_parquet_format was created with these details:
create or replace file format sf_parquet_format
type = 'parquet'
compression = auto;
Do you have any idea what I am missing?
The query inside the USING part was altered (data type casts and aliases):
MERGE INTO myTable as target USING (
SELECT
$1:DAY::TEXT AS DAY,
$1:CHANNEL_CATEGORY::TEXT AS CHANNEL_CATEGORY,
$1:SOURCE::TEXT AS SOURCE,
$1:PLATFORM::TEXT AS PLATFORM,
$1:LOB::TEXT AS LOB
FROM @myExternalStageToS3
(FILE_FORMAT => 'sf_parquet_format')
) as src
ON target.CHANNEL_CATEGORY = src.CHANNEL_CATEGORY
AND target.SOURCE = src.SOURCE
WHEN MATCHED THEN
UPDATE SET
DAY= src.DAY
,PLATFORM= src.PLATFORM
,LOB= src.LOB
WHEN NOT MATCHED THEN
INSERT (
DAY,
CHANNEL_CATEGORY,
SOURCE,
PLATFORM,
LOB
) VALUES (
src.DAY,
src.CHANNEL_CATEGORY,
src.SOURCE,
src.PLATFORM,
src.LOB
);
The UPDATE part does not require ,CHANNEL_CATEGORY= src.CHANNEL_CATEGORY ,SOURCE= src.SOURCE because that condition is already met by the ON clause.
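As a quick sanity check before merging, the cast and aliased SELECT can be run on its own against the stage (a sketch reusing the stage and file format from above):

SELECT
  $1:DAY::TEXT AS DAY,
  $1:CHANNEL_CATEGORY::TEXT AS CHANNEL_CATEGORY,
  $1:SOURCE::TEXT AS SOURCE,
  $1:PLATFORM::TEXT AS PLATFORM,
  $1:LOB::TEXT AS LOB
FROM @myExternalStageToS3
(FILE_FORMAT => 'sf_parquet_format')
LIMIT 10;

If this returns proper text values rather than NULLs, the MERGE will load the same values.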

What is wrong with my UPDATE statement WHERE NOT EXISTS?

What I am trying to accomplish is to update the ISCURRENT field to 'N' and the EFFECTIVE_END_DATE field to the current date if the record of its type does not have the most recent EFFECTIVE_START_DATE.
An error does not get thrown; it just tells me "0 rows affected". But I created a record with a more recent EFFECTIVE_START_DATE, which should affect the other record in the table that has the earlier EFFECTIVE_START_DATE.
Here is an image of the 2 records I'm using to test it out.
The record with a KTEXT of '400 Atlantic' should be changed by this script to have ISCURRENT = 'N' and EFFECTIVE_END_DATE = GETDATE(), because the record with a KTEXT of '500 Maria' has a more recent EFFECTIVE_START_DATE.
UPDATE [SAP].[src_gl_sap_m_cepct]
SET ISCURRENT = 'N',
    EFFECTIVE_END_DATE = GETDATE()
WHERE NOT EXISTS (SELECT [SPRAS],
                         [PRCTR],
                         MAX(EFFECTIVE_START_DATE)
                  FROM [SAP].[src_gl_sap_m_cepct] AS A
                  WHERE CONCAT([SAP].[src_gl_sap_m_cepct].[SPRAS], [SAP].[src_gl_sap_m_cepct].[PRCTR]) = CONCAT(A.[SPRAS], A.[PRCTR])
                  GROUP BY [SPRAS], [PRCTR]);
Thank you!
Correct me if I am wrong, but this part of your query
FROM [SAP].[src_gl_sap_m_cepct] AS A
WHERE CONCAT([SAP].[src_gl_sap_m_cepct].[SPRAS], [SAP].[src_gl_sap_m_cepct].[PRCTR]) = CONCAT(A.[SPRAS], A.[PRCTR])
can also be written like this (because you have a self join):
FROM [SAP].[src_gl_sap_m_cepct] AS A
WHERE CONCAT(A.[SPRAS], A.[PRCTR]) = CONCAT(A.[SPRAS], A.[PRCTR])
Written that way, you can see that you are simply comparing a value to the same value again. That comparison always evaluates as TRUE, so the subquery always finds rows, the NOT EXISTS clause never evaluates as true, and therefore no updates happen.
I think something like this might work for you
UPDATE c
SET c.ISCURRENT = 'N',
    c.EFFECTIVE_END_DATE = GETDATE()
FROM SAP.src_gl_sap_m_cepct c
WHERE EXISTS ( SELECT 1
               FROM SAP.src_gl_sap_m_cepct AS A
               WHERE CONCAT(c.SPRAS, c.PRCTR) = CONCAT(A.SPRAS, A.PRCTR)
                 AND A.EFFECTIVE_START_DATE > c.EFFECTIVE_START_DATE )
If I understood correctly, the statement should be like this:
UPDATE c
set ISCURRENT='N',
EFFECTIVE_END_DATE = GETDATE()
FROM [SAP].[src_gl_sap_m_cepct] c
WHERE EXISTS (
SELECT 1
FROM [SAP].[src_gl_sap_m_cepct] AS A
WHERE CONCAT(c.[SPRAS], c.[PRCTR]) = CONCAT(A.[SPRAS],A.[PRCTR])
AND c.EFFECTIVE_START_DATE < A.EFFECTIVE_START_DATE
);
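For comparison (not from the thread), the same update can be expressed with a window function in an updatable CTE, which avoids the correlated subquery entirely; a sketch assuming SQL Server:

WITH ranked AS (
    SELECT ISCURRENT,
           EFFECTIVE_END_DATE,
           EFFECTIVE_START_DATE,
           -- latest start date within each SPRAS/PRCTR group
           MAX(EFFECTIVE_START_DATE) OVER (PARTITION BY SPRAS, PRCTR) AS max_start
    FROM [SAP].[src_gl_sap_m_cepct]
)
UPDATE ranked
SET ISCURRENT = 'N',
    EFFECTIVE_END_DATE = GETDATE()
WHERE EFFECTIVE_START_DATE < max_start;

Rows whose start date is not the group's maximum get flagged, which matches the intent of marking everything but the most recent record as no longer current.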

PHPSpreadsheet to update an existing file writes only the last record of the query

Hello, I am trying to write to an existing xlsx file using PhpSpreadsheet with setActiveSheetIndexByName(sheetname) and setCellValue with a reference and a value, but it updates only the last record. I have spent more than 12 hours on this.
I tried foreach instead of while and used a counter to increment, but neither worked.
<?php
include_once('db.php');
$prospect = $_REQUEST['prospect'];
require 'vendor/autoload.php';
use PhpOffice\PhpSpreadsheet\IOFactory;
use PhpOffice\PhpSpreadsheet\Spreadsheet;
use PhpOffice\PhpSpreadsheet\Writer\Xlsx;

$sql1 = mysqli_query($db, "select filename, sheetname, row, responsecol, compliancecol, response, compliance from spreadsheet where `prospect`='$prospect' and response <> '' order by row");
//$row = 1;
while ($row1 = mysqli_fetch_assoc($sql1)) {
    $filename = $row1['filename'];                          // test.xlsx
    $sheetname = $row1['sheetname'];                        // mysheet
    $responsecol = $row1['responsecol'] . $row1['row'];     // D1
    $response = $row1['response'];                          // response
    $compliancecol = $row1['compliancecol'] . $row1['row']; // C1
    $compliance = $row1['compliance'];                      // compliance
    $spreadsheet = \PhpOffice\PhpSpreadsheet\IOFactory::load($filename);
    $spreadsheet->setActiveSheetIndexByName($sheetname)
                ->setCellValue($compliancecol, $compliance)
                ->setCellValue($responsecol, $response);
    //$row++;
}
$writer = IOFactory::createWriter($spreadsheet, 'Xlsx');
$writer->save("newfile.xlsx");
exit;
?>
I wish each row from the mysqli result would update its referenced cell with the value.
The easiest way is to set a LIMIT of 1 on your MySQL query. That takes only one value from your data. If you want the last one, you should sort DESC.
$sql1 = mysqli_query($db, "select filename, sheetname, row, responsecol, compliancecol, response, compliance from spreadsheet where `prospect`='$prospect' and response <> '' order by row DESC LIMIT 1");
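Another thing worth checking (an observation, not from the answer above): the loop reloads the workbook from disk with IOFactory::load() on every iteration, discarding the cells set in earlier passes, so only the last row's changes survive in $spreadsheet. A sketch that loads once before the loop and saves once after, assuming every row targets the same file:

<?php
require 'vendor/autoload.php';

use PhpOffice\PhpSpreadsheet\IOFactory;

// load the workbook once, before iterating over the result set
$spreadsheet = IOFactory::load('test.xlsx'); // assumed single filename

while ($row1 = mysqli_fetch_assoc($sql1)) {
    $sheet = $spreadsheet->getSheetByName($row1['sheetname']);
    $sheet->setCellValue($row1['compliancecol'] . $row1['row'], $row1['compliance']);
    $sheet->setCellValue($row1['responsecol'] . $row1['row'], $row1['response']);
}

// all accumulated changes are written in a single save
$writer = IOFactory::createWriter($spreadsheet, 'Xlsx');
$writer->save('newfile.xlsx');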

Csv file to a Lua table and access the lines as new table or function()

Currently my code have simple tables containing the data needed for each object like this:
infantry = {class = "army", type = "human", power = 2}
cavalry = {class = "panzer", type = "motorized", power = 12}
battleship = {class = "navy", type = "motorized", power = 256}
I use the table names as identifiers in various functions, so their values can be processed simply by referencing the table.
Now I want to have this data stored in a spreadsheet (csv file) instead that looks something like this:
Name        class   type       power
Infantry    army    human      2
Cavalry     panzer  motorized  12
Battleship  navy    motorized  256
The spreadsheet will not have more than 50 lines and I want to be able to increase columns in the future.
I tried a couple of approaches from similar situations I found here, but due to lacking skills I failed to access any values from the nested table. I think this is because I don't fully understand what the table structure looks like after reading each line of the csv file into it, and therefore I fail to print any values at all.
If there is a way to get the name, class, type, and power from the table and use that line just like my old simple tables, I would appreciate an educational example. Another approach could be to declare new tables from the csv that behave exactly like my old simple tables, line by line from the csv file. I don't know if this is doable.
Using Lua 5.1
You can read the csv file in as a string; I will use a multi-line string here to represent the csv.
gmatch with the pattern [^\n]+ returns each row of the csv, and gmatch with the pattern [^,]+ returns the value of each column from a given row.
If more rows or columns are added, or if the columns are moved around, the information will still be converted reliably, as long as the first row carries the header information. The only column that cannot move is the first one, the Name column: if that is moved, it changes the key used to store the row in the table.
Comments in the following code:
local csv = [[
Name,class,type,power
Infantry,army,human,2
Cavalry,panzer,motorized,12
Battleship,navy,motorized,256
]]

local items = {}   -- store our values here
local headers = {} -- column names from the first row
local first = true
for line in csv:gmatch("[^\n]+") do
    if first then -- this is to handle the first line and capture our headers
        local count = 1
        for header in line:gmatch("[^,]+") do
            headers[count] = header
            count = count + 1
        end
        first = false -- set first to false to switch off the header block
    else
        local name
        local i = 2 -- we start at 2 because we won't increment for the Name column
        for field in line:gmatch("[^,]+") do
            name = name or field -- check if we know the name of our row
            if items[name] then -- if the name is already in the items table then this is a field
                items[name][headers[i]] = field -- assign our value at the header in the table with the given name
                i = i + 1
            else -- if the name is not in the table we create a new index for it
                items[name] = {}
            end
        end
    end
end
Here is how you can load a csv using the I/O library:
-- Example of how to load the csv.
local path = "some\\path\\to\\file.csv"
local f = assert(io.open(path))
local csv = f:read("*all")
f:close()
Alternatively, you can use io.lines(path), which would take the place of csv:gmatch("[^\n]+") in the for loop above.
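A minimal sketch of that variant (same path as above), collecting each line's fields into an array first:

-- read the csv line by line instead of loading it into one string
local rows = {}
for line in io.lines(path) do
    local row = {}
    for field in line:gmatch("[^,]+") do
        row[#row + 1] = field
    end
    rows[#rows + 1] = row -- rows[1] holds the header names
end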
Here is an example of using the resulting table:
-- print table out
print("items = {")
for name, item in pairs(items) do
    print("    " .. name .. " = {")
    for field, value in pairs(item) do
        print("        " .. field .. " = " .. value .. ",")
    end
    print("    },")
end
print("}")
The output:
items = {
    Infantry = {
        type = human,
        class = army,
        power = 2,
    },
    Battleship = {
        type = motorized,
        class = navy,
        power = 256,
    },
    Cavalry = {
        type = motorized,
        class = panzer,
        power = 12,
    },
}
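To use a parsed row just like one of the old simple tables, index items by the unit name. One caveat: every field captured by gmatch is a string, so numeric fields like power need tonumber before arithmetic:

-- items.Infantry behaves like the old `infantry` table
local unit = items.Infantry
print(unit.class, unit.type)       --> army   human
local power = tonumber(unit.power) -- convert "2" to the number 2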

How to reconfigure the path to the data files in the Clarion 5 IDE?

There is a problem: a system written in Clarion 5 has come down from the past, and now it needs to be rewritten in Java.
To do this, I need to understand its current state and how it works.
I generate the executable file via the Application Generator (*.APP -> *.CLW -> *.EXE, *.DLL).
But when I run it I get the message:
File(...\...\DAT.TPS) could not be opened. Error: Path Not Found(3). Press OK to end this application
And then - halt, File Access Error
What might the problem be? Is it possible to reconfigure the path to the data files in the Clarion 5 IDE?
Generally, Clarion uses a data dictionary (DCT) as the center of persistent data (files) that will be used by the program. There are other ways you can define a table, but since you mentioned you compile from the APP, I'm concluding that your APP is linked to a DCT.
In the DCT you have the declarations for every file your application will use. In the file declaration you can specify both the logical structure and the disk file name. The error message says you have a problem in the definition of the disk file name.
The Clarion language separates the logical data structure definition from its disk file. A "file", for a Clarion program, is a complex data structure which conforms to the following:
structName  FILE, DRIVER( 'driverType' ), NAME( 'diskFileName' )
key           KEY( keyName )
index         INDEX( indexName )
recordName    RECORD
field           DATATYPE
                .
                .
              END
            END
The above is the basic declaration syntax, and a real example would be like:
orders      FILE, DRIVER( 'TopSpeed' ), NAME( 'sales.dat\orders' )
ordersPK      KEY( id ), PRIMARY
customerK     INDEX( customerID )
notes         MEMO( 4096 )
RECORD        RECORD
id              LONG
customerID      LONG
datePlaced      DATE
status          STRING( 1 )
              END
            END

orderItems  FILE, DRIVER( 'TopSpeed' ), NAME( 'sales.dat\items' )
itemsPK       KEY( orderID, id ), PRIMARY
RECORD        RECORD
orderID         LONG
id              LONG
productID       LONG
quantityOrdered DECIMAL( 10, 2 )
unitPrice       DECIMAL( 10, 2 )
              END
            END
Now, with the above two declarations, I have two logical files that reside in the same disk file. This is a capability offered by some file drivers, like the TopSpeed file driver. It is up to the system designer to decide whether, and which, files will reside in the same disk file; I can talk about that in another post if needed.
For now, the problem may arise from the fact that you probably didn't change the NAME property of the file declaration, and the driver you're using doesn't support multi-file storage.
Here's a revised file definition for the same case above, but targeting a SQL database.
szDBconn    CSTRING( 1024 )   ! Connection string to DB server

orders      FILE, DRIVER( 'ODBC' ), NAME( 'orders' ), OWNER( szDBconn )
ordersPK      KEY( id ), PRIMARY
customerK     INDEX( customerID )
notes         MEMO( 4096 ), NAME( 'notes' )
RECORD        RECORD
id              LONG, NAME( 'id | READONLY' )
customerID      LONG
datePlaced      DATE
status          STRING( 1 )
              END
            END

orderItems  FILE, DRIVER( 'ODBC' ), NAME( 'order_items' ), OWNER( szDBconn )
itemsPK       KEY( orderID, id ), PRIMARY
RECORD        RECORD
orderID         LONG
id              LONG
productID       LONG
quantityOrdered DECIMAL( 10, 2 )
unitPrice       DECIMAL( 10, 2 )
              END
            END
Now, if you pay attention, you'll notice the presence of the szDBconn variable declaration, which is referenced in the file declarations. This is necessary to tell the Clarion file driver system what to pass to the ODBC manager in order to connect to the database. Check Connection Strings for plenty of connection string examples.
Check the DCT definitions of your files to see if they reflect what the driver expects.
Also, be aware that Clarion does allow mixing different file drivers to be used by the same program. Thus, you can adapt an existing program to use an external data source if needed.
Here is a complete Clarion program to transfer information from an ISAM file to a DBMS.
            PROGRAM
            MAP
            END
            INCLUDE( 'equates.clw' )   ! Include common definitions

szDBconn    CSTRING( 1024 )

inputFile   FILE, DRIVER( 'dBase3' )
RECORD        RECORD
id              LONG
name            STRING( 50 )
              END
            END

outputFile  FILE, DRIVER( 'ODBC' ), NAME( 'import.newcustomers' ), |
              OWNER( szDBconn )
RECORD        RECORD
id              LONG
name            STRING( 50 )
backendImportedColumn STRING( 8 )
imported        GROUP, OVER( backendImportedColumn )
date              DATE
time              TIME
                END
processed       CHAR( 1 )
              END
            END

  CODE
  IF NOT EXISTS( COMMAND( 1 ) )
    MESSAGE( 'File ' & COMMAND( 1 ) & ' doesn''t exist' )
    RETURN
  END
  inputFile{ PROP:Name } = COMMAND( 1 )
  OPEN( inputFile, 42h )
  IF ERRORCODE()
    MESSAGE( 'Error opening file ' & inputFile{ PROP:Name } )
    RETURN
  END
  szDBconn = 'Driver={{PostgreSQL ANSI};Server=192.168.0.1;Database=test;' & |
             'Uid=me;Pwd=plaintextpassword'
  OPEN( outputFile, 42h )
  IF ERRORCODE()
    MESSAGE( 'Error opening import table: ' & FILEERROR() )
    RETURN
  END
  ! Stuff the information that'll be used for every record
  outputFile.imported.date = TODAY()
  outputFile.imported.time = CLOCK()
  outputFile.processed = 'N'
  ! Arm the sequential ISAM file scan
  SET( inputFile, 1 )
  LOOP UNTIL EOF( inputFile )
    NEXT( inputFile )
    outputFile.id = inputFile.id
    outputFile.name = inputFile.name
    ADD( outputFile )
  END
  BEEP( BEEP:SystemExclamation )
  MESSAGE( 'File importing completed' )
Well, this example program serves only the purpose of showing how the different elements of the program should be used. I didn't use a window to let the user track the progress, and I used Clarion's primitives, like ADD(), which work for sure but, inside a loop, can be a drag on performance.
Much better would be to encapsulate the entire transfer in a transaction, opened with outputFile{ PROP:SQL } = 'BEGIN TRANSACTION' and, at the end, committed with outputFile{ PROP:SQL } = 'COMMIT'.
Yes, through PROP:SQL one can issue ANY command accepted by the server, including a DROP DATABASE, so it is very powerful. Use it with care.
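For illustration, a minimal sketch of that transaction wrapping, reusing the import loop from the program above (the exact transaction statements depend on what the backend accepts):

  outputFile{ PROP:SQL } = 'BEGIN TRANSACTION'
  SET( inputFile, 1 )
  LOOP UNTIL EOF( inputFile )
    NEXT( inputFile )
    outputFile.id = inputFile.id
    outputFile.name = inputFile.name
    ADD( outputFile )
  END
  outputFile{ PROP:SQL } = 'COMMIT'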
Gustavo
