FDClientDataSet does not present latest data on database table

I developed a client/server database application.
The server hosts the Oracle database, which I manipulate from the client (on Android).
Everything works correctly: when I insert/delete/modify a record, the database shows all the changes, but the information I have in the FDClientDataSet does not contain my latest changes. As a workaround I was setting ClientDataset.Active := False and later back to True; that worked, but it causes an exception when running the app on the Android device.
I have tried tons of things, like the Update and ApplyUpdates methods, etc., but nothing works. Any ideas on how to fix this?

Related

WSO2 EMM - App Management Database Bug

Running WSo2 EMM 1.1.0, everything has been working just fine except for one big issue.
From the moment I first click on an app in the App Management tab, the WSO2EMM_DB.h2.db file starts to steadily grow as long as the server is running, even with absolutely no changes. Eventually it gets so big that clicking an app on that tab takes a ridiculously long time to load the list of devices using the app - we're talking 5+ minutes; it becomes completely unusable. I have checked the error logs every time and found no errors at all.
Restarting the server does nothing to correct the issue. Even if I click an app on the App Management tab once, and never again, the database file will continue to grow. Even restarting the server and not logging into the EMM page, it will continue to grow.
The only thing I've found so far that can possibly help is keeping backup copies of the database file and overwriting the current file when it gets too big. Obviously that's not a solution, as I'd need to create a new backup file every time there's a change on the server, and eventually the database file would grow too big from that too.
It's not an issue with the H2 database either. Not only have I tried starting over fresh several times with the same behavior, but here is the only info I could find regarding this issue, and they were having it regardless of whether it was on H2 or MySQL.
I've been trying to find a solution for this for over a month with no success. Any help would be appreciated!
EDIT: It looks like this might be the subject of EMM-826. Unfortunately there seems to be no response to that bug report so far.
EDIT 2: EMM-826 was closed with a message saying the following:
This issue is fixed in the EMM 1.1.0 GA latest pack. Please get all the patches for the product/build the product from the latest source [ https://github.com/wso2/product-emm ] and try again.
Unfortunately, that did not work for me. I'm not sure what exactly I'm doing wrong, so I'll list what I did to try to fix it:
Downloaded the EMM 1.1.0 zip from http://wso2.com/products/enterprise-mobility-manager/.
Downloaded the zip from https://github.com/wso2/product-emm and pasted the files from that into my EMM_HOME directory.
When that didn't work, I searched for patches and found I was only using patches 1-6. In the documentation I found I could download patches 7-12 here. Patches 9 and 10 didn't work right for some reason, preventing me from reaching the EMM dashboard or Publisher; I could only access the Carbon manager. I was able to make patches 7, 8, 11, and 12 work, though - with no change in behavior.
Here are the steps I take to reproduce the issue:
After setting a fresh copy of the EMM up, I log in to the EMM dashboard as Admin, set up a user account, and upload an app through the Publisher.
Register a device to the user account I set up. In this case, an Android device running Android 4.2.2.
From the dashboard, I go to App Management and click the app I uploaded. The list of devices loads, but from that point on, the database file starts growing and eventually, after several hours, becomes so large that the device list will never load.
Please help!
Found this happening also; from a quick look it's the WSO2EMM_DB.notifications table. It seems to keep a history of all notifications over time, and the info for app installs is taken from non-optimized queries, which degrade as the table grows. You 'could' delete all rows from the table, and it will re-populate as devices 'check back' and report their info.
But you'd probably want to write a query that keeps only the latest notification of each type for each user (I'll leave that to someone else...), and as was mentioned, it is apparently fixed in the latest version.
Issue appears to be resolved in EMM 2.0, which can be found here.

Change Implementation of Select, Update, and Insert in SQL Server

To give you the question first: I want to know if it is possible to create a stored procedure or something in SQL Server that intercepts and translates SELECT, INSERT, and UPDATE commands. Now for the explanation:
I am writing a web application to replace an old desktop app. It's a business app, which is basically a database interface with reports, searches, and all the good ol' CRUD. The new and old apps need to live in harmony together since some customers may be using the old and new together to access the same DB.
My problem is that the original database format stores most data in a single blob of text (1 nvarchar(MAX) field). I want to add functionality to search on fields stored in the blob, but it will be cumbersome and slow. I would like to update the database format without changing the desktop app at all, hence the question above.
It occurs to me that I could do this on the client by writing a wrapper class for the data access object and then do a bulk replace in the client code to reference the wrapper, but I want to know what my options are on the server as well.
In case anyone wants to know, the old app is in VB6 and the new in C#.
EDIT
Alright, so it looks like if I do anything on the server side, we are looking at adding stored procedures and then updating the client VB6 code to reference the stored procs. Something like a bulk replace of SELECT with sp_oldselect ... to return the data in a different format. I'm guessing a client-side wrapper would be the best solution for the time being. Old apps die hard.
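To make the wrapper idea concrete, here is a rough sketch in C# (the old client is VB6, so its real wrapper would look different, and every name here - CustomerDataWrapper, sp_GetCustomer, the parameter - is made up for illustration). The point is that the bulk find-and-replace only has to touch call sites once, and the storage details stay hidden behind the wrapper:
using System.Data;
using System.Data.SqlClient;

public class CustomerDataWrapper
{
    private readonly string _connectionString;

    public CustomerDataWrapper(string connectionString)
    {
        _connectionString = connectionString;
    }

    // Today this could still parse the single nvarchar(MAX) blob; later it can
    // call a stored procedure that returns real columns, without changing callers.
    public DataTable GetCustomer(int customerId)
    {
        using (SqlConnection connection = new SqlConnection(_connectionString))
        using (SqlCommand command = new SqlCommand("sp_GetCustomer", connection))
        {
            command.CommandType = CommandType.StoredProcedure;
            command.Parameters.AddWithValue("@CustomerId", customerId);

            DataTable table = new DataTable();
            new SqlDataAdapter(command).Fill(table);   // Fill opens and closes the connection itself
            return table;
        }
    }
}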
You can create a bunch of views for the old client and let it query those views. It will be slow as hell in most cases, but it can 'replace' the select query. For updates and inserts... well... INSTEAD OF triggers on the views could help in some cases, but it will require lots of processing.
However my suggestion is to provide exactly the same functionality in the web app and deprecate the desktop app. When the desktop app's share is low enough, stop supporting it. From this point, you are (mostly) free to add new functions, upgrade the database schema, etc.
I agree with JonH that a lot can go wrong here, but you can try reading up on INSTEAD OF triggers in MS SQL Server here: https://technet.microsoft.com/en-us/library/ms179288(v=sql.105).aspx

Network access was interrupted

The Access database just needs to be open and it will usually crash within the next 20-40 minutes, resulting in the following error message:
Your network access was interrupted. To continue, close the database, and then open it again.
More details:
The database is split, with the back end and front end on a server. The computers are then connected to the server via LAN (ethernet).
Although there are multiple computers connected to the server, the database only has one user at a time.
The database has been fine for almost a year, until this week where this error has started occurring.
We never have connectivity issues with the server.
I have seen several answers saying it is:
the database's fault, as it is starting to corrupt
the server's fault, as it is broken, dropping my connection briefly
Microsoft's fault; they should patch it
I am hoping this is a problem with the database itself, as I am not responsible for the server.
Does anyone have a definitive solution?
I have recently experienced the same problem, and it all started when I moved my DB to an external disk. The same DB was working just fine on the local disk, or on the previous external disk. So I am guessing it is just a bug that has to do with the drive letter changing or something like that.
The problem sounds like an unstable LAN connection OR changes to the LAN environment (e.g. new hardware or changes to admin settings) causing increased latency.
If you have forms in the FE bound to BE tables, the latency can cause the connection to be severed, resulting in the error you see.
I'm not a network admin but the main culprits I've seen are:
Users connecting to the network over a VPN on an unstable connection (cell phones, crappy wifi, or just bad ISP service).
Network admins capping persistent connections to a share causing disconnects.
Unstable network hardware or bad hardware configuration.
"Switching" between wired and wireless LAN connections.
I don't think the issue is the database itself, other than having forms bound to a BE database, which is more of a fundamental design problem than anything else.
Good luck!
I use Access 2010. I had the same issue but solved it as follows.
On the external data ribbon, go to the Import & link group and click on Linked Table Manager.
Click on select all.
Click on Ok to refresh the links.
In cases where the path of the back-end database file has changed, browse to the new location and select the new path; this will also refresh the links. That solved the problem for me.
You wrote, "The database has been fine for almost a year, until this week where this error has started occurring."
Clearly something has recently changed for this to be happening and without narrowing the field of possibilities it's anyone's guess. However, in my experience Jet DB crashes when two or more users are accessing and editing the same record(s) at the same time. So, if you've recently added new users this is a possibility.
Note: Jet is a file-server DB, not client-server, which means the app was probably designed for a specific number of front-end users. Without knowing more, I would start there.
I resolved my issue when I figured out that I had an offline directory set up and the sync was having an issue. I turned off the sync, tested it, and the error went away.

NHibernate will not insert a record

I have an application that is now 4+ years old and is exhibiting some odd behavior on our latest deployment. The application uses NHibernate for all inserts, updates, selects, etc. We are currently using .NET 2.0 and NHibernate 1.2 (I know, we need to upgrade).
This deployment is on Windows Server 2008 x64 with IIS 7.5. What I have seen so far is that the application runs but is unable to insert or update records in the DB; reads seem fine so far, but writes are a problem. Some writes actually work (inserts into some small tables), but most never even make it to the DB.
Using SQL Profiler, the inserts/updates never make it to the server. With log4net turned up to DEBUG and show_sql set to true, the select statements appear, but the insert/update statements never make it into the log at all and never show up at the server.
What's even more odd is that the application seems to be oblivious to this: the commandandclose runs without exception (open session in view with an HttpModule), the domain objects come back with UUIDs generated, etc., but they never get persisted.
Certainly an upgrade is due, but I would hate to try it during a deployment and without time to accurately test the app. Any ideas?
My guess is that the default ISession FlushMode has been changed from Auto to Never or Commit. Never means that the session will only flush when Flush() is called explicitly by the application; Commit means that the session will flush when a transaction is committed.
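If that turns out to be the cause, it is cheap to confirm by forcing a flush explicitly; a minimal sketch, assuming the ISession is handed in from wherever the open-session-in-view module exposes it:
using NHibernate;

public static class NHibernateFlushExample
{
    // Make sure pending changes actually reach the database.
    public static void Save(ISession session, object entity)
    {
        // If the default was changed to Never/Commit somewhere, this puts it back.
        session.FlushMode = FlushMode.Auto;

        using (ITransaction tx = session.BeginTransaction())
        {
            session.SaveOrUpdate(entity);
            tx.Commit();   // committing flushes the pending INSERT/UPDATE
        }
    }
}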
Back out your current deployment and return to what you had before. Then look for the mistake someone made. If it used to insert and now does not, then something is wrong with your current code. If it isn't creating the insert/update statements, then I'd look first at where they are supposed to be created. Did the current deployment actually insert or update records in dev? Did anybody test that, or were you relying on the fact that it didn't pop up an error? If it did work in dev and doesn't work in prod, I'd look at the environmental differences between dev and prod.
Both good answers; the problem was in the deployment. The web.config was set up for IIS 6, and the deployment to IIS 7 did not properly set up the open-session-in-view HttpModule that is used to commit the transaction. Changing the pipeline mode from Integrated to Classic solved the problem.
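For reference, an open-session-in-view module has roughly the shape sketched below; this is an illustration only (SessionFactoryHolder is a hypothetical stand-in, and the app's real module will differ). It also suggests why the failure was silent: if IIS 7's Integrated pipeline never loads a module that is registered only under system.web/httpModules, nothing ever flushes or commits, while Classic mode still honors the old registration.
using System.Web;
using NHibernate;

public static class SessionFactoryHolder
{
    // Stand-in: the real app builds its ISessionFactory from its own configuration.
    public static readonly ISessionFactory Factory =
        new NHibernate.Cfg.Configuration().Configure().BuildSessionFactory();
}

public class OpenSessionInViewModule : IHttpModule
{
    public void Init(HttpApplication application)
    {
        application.BeginRequest += delegate
        {
            // Open one session per request and park it on the HttpContext.
            ISession session = SessionFactoryHolder.Factory.OpenSession();
            HttpContext.Current.Items["nh.session"] = session;
        };

        application.EndRequest += delegate
        {
            ISession session = (ISession)HttpContext.Current.Items["nh.session"];
            if (session != null)
            {
                session.Flush();   // without this, "saved" objects silently never hit the database
                session.Close();
            }
        };
    }

    public void Dispose() { }
}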

Can connectionstring cross over to other sites on the same server?

I ran across a new problem in the last week. Due to the nature of my project and the available budget, the small intranet web application I've been working on runs on a single machine that is both the testing and live server, serves up the pages, and hosts the SQL server. This will last at least until the project is out of the major development cycle. Now that the project has real users while I continue development, I duplicated the database, so I have a safe copy to mess with that won't cause havoc to live business data, and made a development copy of the website.
All was well until I discovered an anomaly on the test copy of the site: anything that uses a SQL datasource was properly pulling its data from the test database, but anything that gets its data from a stored procedure triggered in the code-behind was pulling its data from the live database.
My confusion comes from the fact that all stored procedures and SQL datasources ultimately point back to the same connection string setting in the web.config file to know where to connect. I just change the database name depending on whether I'm uploading the latest changes to the test or live site.
My question comes down to this: with one connection string in each site, why would my test site get data from one database when accessed one way and from the other database when accessed the other way?
Here's the connection string they all point back to; the names and passwords have of course been changed, but the structure is intact.
<add name="db_Connection" connectionString="Data Source=SERVERNAME;Initial Catalog=DATABASE_live;Persist Security Info=False;User ID=USERID;Password=password" providerName="System.Data.SqlClient"/>
I added a key to the appSettings to reference the name of the database connection, so I could easily change its name if need be without having to edit dozens of pages for the code-behind SProc calls.
<add key="defaultDB" value="db_Connection" />
Am I violating some rule I'm unaware of, or is there something else going on that I need to be aware of and change so I can have a true test environment while I continue to develop an active site?
EDIT: This project is in ASP.NET 2.0 VB; fixed the code display.
SOLUTION FOUND: I have tracked down the solution; thanks for the pointers, they got me looking elsewhere. When I copied the site to a different location for testing, I forgot to update my appSettings key for the site's location, which apparently caused the following part of the stored procedure calls to grab data from the live site's web.config:
System.Web.Configuration.WebConfigurationManager.OpenWebConfiguration(pubvar_webConfig)
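In other words, the culprit was roughly the pattern sketched below (shown in C# although the project is VB, and the path value is purely illustrative): OpenWebConfiguration with an explicit virtual path reads whichever site's web.config that path points to, while opening the running application's own configuration avoids the mix-up.
using System.Configuration;
using System.Web.Configuration;

public static class ConnectionStringLookup
{
    // Mirrors the variable from the question; the value here is made up.
    private static string pubvar_webConfig = "/LiveSite";

    // The pitfall: this opens whichever site's web.config the stored path points to,
    // so a stale path quietly returns the LIVE connection string.
    public static string FromStoredPath()
    {
        Configuration config = WebConfigurationManager.OpenWebConfiguration(pubvar_webConfig);
        return config.ConnectionStrings.ConnectionStrings["db_Connection"].ConnectionString;
    }

    // Opening the running application's own configuration ("~") avoids the mix-up.
    public static string FromCurrentApplication()
    {
        Configuration ownConfig = WebConfigurationManager.OpenWebConfiguration("~");
        return ownConfig.ConnectionStrings.ConnectionStrings["db_Connection"].ConnectionString;
    }
}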
Change the username and password on the dev database. If your problem persists then you might have a connection string set somewhere else that you don't know about.
I would search all of the files in your solution to make sure you don't have one of the database names hard coded some place. Maybe in the designer files?
It may be worth running the two applications in different app pools via IIS (if you aren't already, of course!). This should eliminate any concurrency issues between the test and production sites at the application level.
IMHO, with a shared test/production environment, separate app pools are good practice at any time.
