We're developing an application based on Yocto (distro Poky 1.7), and now we have to implement logging, so we have installed the packages already provided by our meta-oe layer:
Syslog-ng 3.5.4.1
libdbi 0.8.4.1
libdbi-drivers 0.8.3
Installation went through without any problems and Syslog-ng runs correctly, except that it doesn't write to an existing SQLite database.
In the syslog-ng.conf file there is just one source, the default Unix stream socket /dev/log, and one destination, a local SQLite database (with just 4 columns). A simple program that writes 10 log messages via the C syslog() API is used for testing.
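For reference, the test program is essentially a sketch like this (the identifier passed to openlog() and the message text are hypothetical):

/* test client: writes 10 messages to /dev/log via syslog(3) */
#include <syslog.h>

int main(void)
{
    openlog("logtest", LOG_PID, LOG_USER);   /* hypothetical program name */
    for (int i = 0; i < 10; i++)
        syslog(LOG_INFO, "test message %d", i);
    closelog();
    return 0;
}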
If the database already exists, empty or not, running the demo program writes no log messages into it;
If the database doesn't exist, Syslog-ng creates it and is able to write log messages until the board is rebooted. After that we fall back into the first condition, so no more log messages can be saved into the db.
After some days spent on this issue, I've found that this behaviour could be due to this SQL statement (in the function afsql_dd_validate_table(...) in afsql.c):
SELECT * FROM tableName WHERE 0=1
I know that this is a useful statement to check for the existence of the table called 'tableName', the WHERE 0=1 being an always-false condition that avoids scanning the whole table.
With Syslog-ng debugging enabled, it seems that the previous statement doesn't return any information about the columns, so Syslog-ng thinks they don't exist and tries to add them, which causes an error since they already exist. That's why it doesn't write anything to the database.
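One way to verify this outside Syslog-ng is a small libdbi probe that runs the same query and prints the field metadata the driver reports; a minimal sketch, assuming the sqlite3 driver, a database file logs.db in /tmp, and a table named messages (all hypothetical names):

/* build: gcc -o probe probe.c -ldbi */
#include <stdio.h>
#include <dbi/dbi.h>

int main(void)
{
    dbi_conn conn;
    dbi_result result;

    dbi_initialize(NULL);                /* default driver directory */
    conn = dbi_conn_new("sqlite3");
    if (!conn)
        return 1;
    dbi_conn_set_option(conn, "sqlite3_dbdir", "/tmp");
    dbi_conn_set_option(conn, "dbname", "logs.db");

    if (dbi_conn_connect(conn) < 0) {
        fprintf(stderr, "connect failed\n");
        return 1;
    }

    /* the same probe afsql_dd_validate_table() issues */
    result = dbi_conn_query(conn, "SELECT * FROM messages WHERE 0=1");
    if (result) {
        unsigned int n = dbi_result_get_numfields(result);
        printf("driver reports %u fields\n", n);
        for (unsigned int i = 1; i <= n; i++)   /* libdbi field indices are 1-based */
            printf("  column: %s\n", dbi_result_get_field_name(result, i));
        dbi_result_free(result);
    }

    dbi_conn_close(conn);
    dbi_shutdown();
    return 0;
}

If this prints 0 fields for an existing table, the problem is in how the driver exposes metadata for empty result sets, not in Syslog-ng's table check itself.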
Modifying the SQL query to this one:
SELECT * FROM tableName
I'm still unable to write any log messages to the database if it is empty, but now it's possible to make everything work correctly if a dummy record (row) is added when the database is created.
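For example, a sketch like this pre-creates the table and seeds the dummy row (the file name, table name, and the four column names are assumptions; the real schema is whatever syslog-ng.conf defines):

/* build: gcc -o seed seed.c -lsqlite3 */
#include <stdio.h>
#include <sqlite3.h>

int main(void)
{
    sqlite3 *db;
    char *err = NULL;

    if (sqlite3_open("/tmp/logs.db", &db) != SQLITE_OK)
        return 1;

    /* hypothetical 4-column schema plus one dummy row */
    const char *sql =
        "CREATE TABLE IF NOT EXISTS messages"
        " (date TEXT, host TEXT, program TEXT, message TEXT);"
        "INSERT INTO messages VALUES ('-', '-', '-', 'dummy');";

    if (sqlite3_exec(db, sql, NULL, NULL, &err) != SQLITE_OK) {
        fprintf(stderr, "sqlite error: %s\n", err);
        sqlite3_free(err);
    }

    sqlite3_close(db);
    return 0;
}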
But this shouldn't be the right way to work. Has anyone faced this issue and found a solution for making Syslog-ng log to an empty SQLite database?
Many thanks to everybody
Regards
Andrea
I am troubleshooting an error in a package.
Update MYTABLE for MYCOLUMN (REF to task name):Error: Executing the query "..." failed with the following error: "Invalid column name 'MYCOLUMN'.". Possible failure reasons: Problems with the query, "ResultSet" property not set correctly, parameters not set correctly, or connection not established correctly.
I have verified that the table and column exist, and the length of the field is far more than it needs: the value requires 14 characters and the column is declared as varchar(250).
I have verified the script works on the server in SSMS outside of the context of the package.
I have verified that the connection and database in the package are as I expect.
Is there a way to verify this on the server? I did try to look at the Connection Managers tab on the package configuration itself, i.e. in the Integration Services Catalogs -> SSISDB -> solutionfolder -> .. -> package.dtsx -> Configure context menu, but it is empty.
Any ideas on how to troubleshoot?
Just to add more context: the package contains 27 other tasks. Nine tasks in a row are linked to this task, but all are set to 'on completion', and all seem to be doing work independently of one another; one task is a loop and the rest are single independent tasks. So I don't know at this stage whether it is perhaps a cascading connection issue; I am just reading what the log says.
I kicked off the package at 9:54 am; the timestamp on the error log says 11:45 am, so this error was reported nearly 2 hours into the run.
I would suggest the following to troubleshoot the issue.
First, keep just this task and disable all the other tasks, so that you can focus on this issue specifically. That will tell you whether the connection is working without issues.
Second, edit the task and check whether the parameters are set properly. Different providers have different ways of setting parameters, so check them against the Execute SQL Task documentation.
One more thing: you may be pointing the package at a different connection than the one you used in SSMS. In that case it works in SSMS, while the connection being used by the package does not yet have the schema changes applied.
I finally figured it out before I read the suggestions offered above, so I will give some credit if I can! FYI: we have a lot of dev servers. I clicked on the overview hyperlink in the All Executions logs and it named a different server. I also found the connection on the job calling the package, not on the package itself, so I have learnt something today. Anyhow, the job said one server but the overview said another, so I was back to square one scratching my head.
Then I decided to open the connection manager on the job and select the field; rather than cancelling, I clicked OK without thinking about it, and noticed the field changed to bold face. So I am assuming that if you make a manual change on the server in SSMS, the change shows up in bold, which is kind of useful. I can only assume this is an MS SSMS, SSIS, or VS deployment bug: the deployment does not overwrite the previous connection, although the SSMS interface says otherwise. Perhaps somebody can shed some light. Having not checked the server before I made a change and deployed it, I have no idea whether the previous settings were changed manually by someone or whether the connection in the package was changed and deployed. Anyhow, the job history shows it had been failing for a while, so it wasn't me; whoever made a change previously either didn't figure it out, didn't bother, didn't know how, or didn't notice. Anyhow, it is pointing to the correct server now!
I have a bunch of legacy Access-based databases that I've been using for years without issue; queries have been running between them for years using ODBC/DAO/ADO. Now, suddenly, in the last few days I've started getting the "The database has been placed in a state by user..." error on a bunch of them.
I have tried to narrow the problem down, but it seems to be getting worse. I have tried making a local copy of the database file, opening it, and then, on the same machine, trying to create an ODBC connection to it, and I get the error. I have also tried running successive queries on the database and still get the same thing (with a copy of the file on my local machine, so there is only my single connection: basically connect to the database, run a query, close the connection, wait 2 minutes, then try to open a new connection - FAIL). So it is definitely not a multi-user limit problem or anything like that.
The issue is consistent across multiple platforms: directly in MS Access (2010 and 2013), with Excel (2010 and 2013) queries to the Access DB, and with Windows Forms VB.net applications trying to query the Access DB (through datasets, OLEDB, and ADO).
Until this week all of these applications were working as designed, and had been for years. I am the only dev working on this stuff, so I know that nothing in the programming has changed; it must be an external issue.
The back-end databases reside on a shared server drive (the server is running Windows Server 2008), and we have had no other connection issues to the server or network; the problem is limited to connections to Access database files.
Does anyone know if something has changed lately (in the last week or so) with the ODBC drivers? Maybe an MS update?
Thanks in advance!
It seems that you can fix this issue by buffering the Access binary. Use the Binary.Buffer function in a query that defines your Access database, then reference that query in order to use the binary in a query that pulls each table. Note: I also define parameters for my folder path and file names.
For example:
// myDbBinary
let
    Source = Binary.Buffer(File.Contents(DataFolder_param & FileName_param))
in
    Source
// Table1 Query
let
    Source = Access.Database(myDbBinary, [CreateNavigationProperties=true]),
    _Table1 = Source{[Schema="",Item="Table1"]}[Data]
in
    _Table1
The source is this
I am working on an iOS project that has a Sybase (UltraLite) database that is synchronized with a Sybase SQL Anywhere 12 database using MobiLink.
Everything was working properly until today, when I decided to add some fields to the databases so that they synchronize to the main database.
I updated the schema of the consolidated database from the main engine, then updated the schema of the remote database from the consolidated engine, then mapped the added fields together, and deployed a new UltraLite database.
Please note that it's not the first time I have done a task like this; I always add fields and sync databases.
After the update, when I synchronize using the blank UltraLite database, MobiLink fails, giving only this error: Synchronization Failed: -1305 (MOBILINK_COMMUNICATIONS_ERROR) %1:201 %2: %3:0
I have researched error number 201 in Sybase, and it points to: SQLE_NOT_PUBLIC_ID
and in the sybase documentation the error's probably cause is:
"The option specified in the SET OPTION statement is PUBLIC only. You cannot define this option for any other user."
I have tried to redeploy, and I have tried to move the engine to a Windows PC; everything gives the same error. I have no clue where this SET OPTION statement came from or how I can resolve it.
Any hints are appreciated!
The problem was simply caused by a small network timeout value when setting up the MobiLink parameters:
info.stream_parms = (char *)"host=192.168.0.100;port=3309;timeout=1";
I just changed the value from timeout=1 to timeout=300 and it worked!
I followed a tutorial on the internet to create my own database, and I successfully built a program upon it. Then I created an Access .mdb file (another database) and changed the database which the program connected to, to the one I created.
I just made that one change, but then it started showing me an error whenever I tried to update using
da.Update(ds, "Phone Book")
where da is the data adapter and ds is the dataset.
The error was: "Syntax error in INSERT INTO statement".
I have just changed the DB that the program is connecting to. I did not change the code one bit.
EDIT: I forgot to mention that I searched for this on Google, and one thing I read was that the Access database might be read-only. But I unchecked the read-only box, so I don't know whether that might still be the problem. Either way, I don't think there is a problem with the code.
EDIT: I just discovered that even if I change the table which is being referred to, it throws up the same error.
It sounds like the first database probably used something like SQL Server Express. That's a completely different kind of database than Access, with a different provider, a different dialect of SQL, a different connection string, etc. Why would you think you can change all that without breaking some of your code?
I am using SQL Server for my web application. How will I know that an insert query failed because the database server's disk is already full?
The error code you will get back will indicate that the disk is full: 1105 (primary filegroup full) or 9002 (transaction log full).
You can simulate this by disabling the auto-grow feature on the database (it's a checkbox in the database properties, on the Files page) and filling up the database. The error will be the same. You can also cap the file size directly:
ALTER DATABASE YourDatabase
MODIFY FILE (NAME = 'YourFile', MAXSIZE = 50MB);
If you want to find your space usage:
exec sp_spaceused
This will tell you how much space is used by the current database.
Check the error code you get back from SQL Server when you try to insert into the database. With that error in hand you can then decide what to do (e.g. try the insert again, or try to free up some space on the server). Also, if you haven't already, place your INSERT statement inside a transaction so that you can roll back if an error occurs.
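To illustrate the idea, here is a minimal C/ODBC sketch that inspects the native error code after a failed insert; the driver name, connection string, table, and column are all hypothetical, and 1105/9002 are the codes mentioned above. Whatever client library you actually use will surface the same native error numbers.

/* build (unixODBC): gcc insertcheck.c -lodbc */
#include <stdio.h>
#include <sql.h>
#include <sqlext.h>

int main(void)
{
    SQLHENV env;
    SQLHDBC dbc;
    SQLHSTMT stmt;

    SQLAllocHandle(SQL_HANDLE_ENV, SQL_NULL_HANDLE, &env);
    SQLSetEnvAttr(env, SQL_ATTR_ODBC_VERSION, (SQLPOINTER)SQL_OV_ODBC3, 0);
    SQLAllocHandle(SQL_HANDLE_DBC, env, &dbc);

    /* hypothetical connection string: adjust driver, server, and database */
    SQLCHAR connstr[] = "Driver={ODBC Driver 17 for SQL Server};"
                        "Server=myserver;Database=mydb;Trusted_Connection=yes;";
    if (!SQL_SUCCEEDED(SQLDriverConnect(dbc, NULL, connstr, SQL_NTS,
                                        NULL, 0, NULL, SQL_DRIVER_NOPROMPT))) {
        fprintf(stderr, "connect failed\n");
        return 1;
    }

    SQLAllocHandle(SQL_HANDLE_STMT, dbc, &stmt);

    /* hypothetical insert; table and column are placeholders */
    if (!SQL_SUCCEEDED(SQLExecDirect(stmt,
            (SQLCHAR *)"INSERT INTO mytable (col) VALUES ('x')", SQL_NTS))) {
        SQLCHAR state[6], msg[256];
        SQLINTEGER native = 0;
        SQLSMALLINT len;

        /* the native error code is what distinguishes "out of space" */
        if (SQL_SUCCEEDED(SQLGetDiagRec(SQL_HANDLE_STMT, stmt, 1, state,
                                        &native, msg, sizeof(msg), &len))) {
            if (native == 1105 || native == 9002)
                fprintf(stderr, "database out of space (%d): %s\n",
                        (int)native, msg);
            else
                fprintf(stderr, "insert failed (%d): %s\n", (int)native, msg);
        }
    }

    SQLFreeHandle(SQL_HANDLE_STMT, stmt);
    SQLDisconnect(dbc);
    SQLFreeHandle(SQL_HANDLE_DBC, dbc);
    SQLFreeHandle(SQL_HANDLE_ENV, env);
    return 0;
}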
I suppose you can trust that if the disk is full, SQL Server will return that error code :).
You can make your testing code think it is communicating with SQL Server when it is instead talking to some fake object of yours that responds with the error codes you want to test.
There are frameworks that can help you with this. One of them is Rhino Mocks, which you can download from http://ayende.com