I have an EC2 box that I SSH into often. I want to install a Redis database on the box and query it frequently. Unfortunately, the code that stores values in the database cannot run on the box, since it gets 'killed'.
So I have been thinking of running the code on my personal laptop and storing the values in the Redis database on the box. That means I need to repeatedly query a database that is not on my laptop but on EC2.
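To make it concrete, this is roughly what I picture the laptop-side code doing (just a sketch; the host, port, and key names are placeholders, and it assumes the redis-py package and that the Redis port is reachable, e.g. forwarded over SSH):

```python
# Rough sketch of what I want to run on the laptop; host, port, and key names are placeholders.
# Assumes the redis-py package (pip install redis) and that port 6379 is reachable, e.g. via
# an SSH tunnel:  ssh -L 6379:localhost:6379 user@my-ec2-host
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)  # localhost through the tunnel
r.set("sensor:latest", "42")   # the laptop-side code stores values...
print(r.get("sensor:latest"))  # ...and queries them back from the Redis on EC2
```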
Appreciate your help and time.
Thanks!!
I have several VMs in which the developers have access to one specific database they need for their project (mostly as a backend for Typo3). Now, some of them have installed Matomo in the Typo3 database. This is not a clean solution - if they had asked me, I would have created a separate database just for Matomo. The problem is that the Matomo installations grow, and with them the automysqlbackups, until the complete filesystem is filled with Matomo backups.
I want to move all tables whose names begin with matomo to another existing but empty database called Matomo. I have tried the user-friendly way with DBeaver and other tools, but I suspect the solution is simply a line of SQL from someone with more database experience than me. After that, the Matomo data would be dumped as a separate database, and I could limit the number of stored backups with some simple housekeeping scripts without affecting long-term storage for the Typo3 backend.
FYI: I am doing this on Debian 11 (Bullseye), which comes with MariaDB 10.5.
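To show what I imagine the answer looks like, here is a sketch (the source database name "typo3db" and the credentials are placeholders, and it assumes the pymysql package; I have not run this):

```python
# Sketch only: generate (and optionally run) one RENAME TABLE per matomo_* table.
# "typo3db" and the credentials are placeholders; assumes the pymysql package.
import pymysql

conn = pymysql.connect(host="localhost", user="root", password="secret",
                       database="information_schema")
with conn.cursor() as cur:
    cur.execute(
        "SELECT table_name FROM tables WHERE table_schema = %s AND table_name LIKE %s",
        ("typo3db", "matomo%"),
    )
    for (table,) in cur.fetchall():
        stmt = f"RENAME TABLE `typo3db`.`{table}` TO `Matomo`.`{table}`"
        print(stmt)          # print for review first...
        # cur.execute(stmt)  # ...then uncomment to actually move the tables
conn.close()
```

From what I have read, RENAME TABLE between two databases on the same MariaDB server just moves the table without copying the data, so I hope something along these lines is all that is needed.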
Bye
Stefan
I have just started using TB, about a week ago. I have installed TB on Ubuntu 20.04 with a PostgreSQL DB, created a dashboard with dummy data, and also shown real data from a sensor attached to a Raspberry Pi.
The dashboard is showing the telemetry data fine. But my question is: how do I store these data in PostgreSQL tables? There seems to be an option in the Rule Engine to direct the data to the DB, but how will the Rule Engine node identify which device is sending what data? There could be many devices under one customer/tenant. And which table in the DB? I would like to be able to look at past data from the DB (say months or years back).
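To make the question concrete, this is the kind of historical lookup I would like to be able to do (a sketch only, assuming psycopg2; the table and column names ts_kv, entity_id, ts, dbl_v are my guesses from browsing the schema and may well differ between versions):

```python
# Sketch of the kind of historical query I have in mind; assumes the psycopg2 package.
# NOTE: the table/column names (ts_kv, entity_id, ts, dbl_v) are my guesses from browsing
# the PostgreSQL schema and may differ between versions. The device UUID and start
# timestamp below are placeholders.
import psycopg2

conn = psycopg2.connect(host="localhost", dbname="thingsboard",
                        user="postgres", password="postgres")
cur = conn.cursor()
cur.execute(
    """
    SELECT ts, dbl_v
    FROM ts_kv
    WHERE entity_id = %s::uuid   -- the device's UUID tells me which device sent the value
      AND ts >= %s               -- epoch milliseconds, so I can go back months or years
    ORDER BY ts
    """,
    ("784f394c-42b6-435a-983c-b7beff2784f9", 1609459200000),
)
for ts, value in cur.fetchall():
    print(ts, value)
conn.close()
```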
I searched the net for some time but didn't find any concrete information. Any help would be very much appreciated.
Thanks in advance.
Scenario:
I built an original Vagrant box, which got installed on several computers. Git tracks changes to the code, but when someone locally adds data to the database (say MySQL), or changes the schema, I'd like to somehow automate the task of keeping all Vagrant boxes "updated" with the database changes.
The same question goes for installing new packages/software locally on a Vagrant box and then distributing those changes to other installations. I did this with a custom shell script that executes on vagrant up, but it seems like the wrong way to do the same for the database (dump, drop, import); I sketch my current approach below.
Can it be done with some provisioning tool? All the Google results point to the initial building and provisioning of a box; no luck finding out how to deal with local changes...
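For reference, my current dump/drop/import approach boils down to something like this (a sketch; the database name, credentials, and the /vagrant dump path are placeholders). This is exactly the part I would like a provisioning tool to handle more cleanly:

```python
# Roughly my current approach, as a script a "vagrant up" shell provisioner could call.
# Placeholders: database "appdb", MySQL user/password "vagrant"; the dump lives in the
# shared /vagrant folder so every box sees the same file.
import subprocess
import sys

DUMP = "/vagrant/db/appdb.sql"

def export_db():
    """Run on the box where data or schema changed: dump into the shared folder."""
    with open(DUMP, "w") as out:
        subprocess.run(["mysqldump", "-uvagrant", "-pvagrant", "--databases", "appdb"],
                       stdout=out, check=True)

def import_db():
    """Run on the other boxes (e.g. during provisioning): replay the dump."""
    with open(DUMP) as dump:
        subprocess.run(["mysql", "-uvagrant", "-pvagrant"], stdin=dump, check=True)

if __name__ == "__main__":
    export_db() if "export" in sys.argv else import_db()
```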
I have a client who owns a business with a handful of employees. He has a product website that has several hundred static product pages that are updated periodically via FTP.
We want to change this to a data-driven website, but the database (which will be hosted at an ISP) will have to be updated from data on my client's servers.
How best to do this on a shoestring? Can the database be hot-swapped via FTP, or do we need to build a web service we can push changes to?
Ask the ISP about the options. Some ISPs allow you to FTP-upload the .mdf (database file).
Some will allow you to connect with SQL Server Management Studio.
Some will allow both.
You have to ask the ISP.
Last time I did this, we created XML documents that were FTP'd to the website. We had an admin page that would clear out the old data by running some stored procs to truncate the tables, then import the XML docs into the SQL tables.
Since we didn't have the whole server to ourselves, there was no access to SQL Server DTS to schedule this stuff.
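Not the original code, but roughly what that admin-page step looked like, sketched as a script (the connection string, stored proc, table, and columns are made up for illustration and assume pyodbc against SQL Server):

```python
# Illustrative sketch only: truncate the tables via a stored proc, then re-import rows
# from an uploaded XML file. The DSN, proc name, table, and columns are hypothetical.
import pyodbc
import xml.etree.ElementTree as ET

conn = pyodbc.connect("DRIVER={ODBC Driver 17 for SQL Server};"
                      "SERVER=dbhost;DATABASE=shop;UID=webuser;PWD=secret")
cur = conn.cursor()

cur.execute("{CALL dbo.ClearProductTables}")      # clear out the old data first

rows = [
    (p.findtext("sku"), p.findtext("name"), float(p.findtext("price")))
    for p in ET.parse("products.xml").getroot().findall("product")
]
cur.executemany("INSERT INTO dbo.Products (Sku, Name, Price) VALUES (?, ?, ?)", rows)
conn.commit()
```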
There is a Database Publishing Wizard from MS which will take all your data and create a SQL file that can then be run on the ISP. It will also, though I've never tried it, go directly to an ISP database. There is an option button on one of the wizard screens that does it.
It does require the user to have a little training, and it's still a manual process, so maybe not what you're after, but I think it will do the job.
Long-term, building a service to upload the data is probably the cleanest solution, as the app can then control its own import procedures. You could keep this grossly simple and just have the local copy dump some sort of XML that the app can read, making it not much harder than uploading a file while still keeping it automatable. Having this import procedure would also help with development, as you now have an automated and repeatable way to sync data.
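As a rough illustration of that idea, the local side could dump its rows to XML and push them to an import endpoint the app exposes; the URL, fields, and element names below are invented for the example:

```python
# Sketch of the client/local side: dump the local product data to XML and push it to an
# import endpoint the web app exposes. The URL, auth, and fields are placeholders.
import xml.etree.ElementTree as ET
import requests

products = [{"sku": "A-100", "name": "Widget", "price": "9.99"}]   # normally read from the local DB

root = ET.Element("products")
for p in products:
    item = ET.SubElement(root, "product")
    for field, value in p.items():
        ET.SubElement(item, field).text = value

payload = ET.tostring(root, encoding="utf-8")
resp = requests.post("https://example.com/admin/import", data=payload,
                     headers={"Content-Type": "application/xml"}, timeout=30)
resp.raise_for_status()
```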
This is what I usually do: use a tool like Red Gate's SQL Data Compare. The tool compares data between two catalogs (on the same or different servers) and generates a script for syncing them.
Hi everyone, I have a small problem uploading my database. I have created a localhost website on my PC for a vehicle tracking system, and now I have no clue how to upload it. The website uses two Microsoft Access databases on my PC, which get updated at very regular intervals (almost every second), and the data has to be uploaded to the web in real time. Right now I use ODBC on localhost.
Does anybody have any idea how to do it?
Please help if so...
Depending on your traffic, using Access in a multi-user web server environment will be a real pain (Access is file-based, etc.). Perhaps try to build a web service to make changes directly on the server?
If you don't want to use ODBC, you may have a look at ADO connection strings (www.connectionstrings.com is a good starting point).
I would concur with #Sascha. I wouldn't even bother wasting the time trying to run your site with Access.
Depending on your host, you should have access to a free MySQL or MSSQL database. Use this instead. Write a new page that takes parameters and writes them to your online database; that way you can set up a relay on your machine that pushes the changes from your local machine to the web.
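A rough sketch of what such a relay could look like, assuming pyodbc with the Access ODBC driver on the local machine; the page URL, table, and column names are invented for the example:

```python
# Hypothetical relay: poll the local Access database and push new rows to a page on the
# web host that writes them into the online MySQL/MSSQL database. Driver string, URL,
# table, and column names are placeholders.
import time
import pyodbc
import requests

ACCESS = r"DRIVER={Microsoft Access Driver (*.mdb, *.accdb)};DBQ=C:\tracking\vehicles.accdb"
last_id = 0

while True:
    conn = pyodbc.connect(ACCESS)
    cur = conn.cursor()
    cur.execute("SELECT ID, VehicleID, Lat, Lon FROM Positions WHERE ID > ?", last_id)
    for row in cur.fetchall():
        requests.post("https://example.com/track/update.php",
                      data={"vehicle": row.VehicleID, "lat": row.Lat, "lon": row.Lon},
                      timeout=10)
        last_id = row.ID
    conn.close()
    time.sleep(1)   # the Access files change roughly every second
```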
This is definitely not easy, but it can be done. You would need to run a SQL Server database on the web server, and then push the data from Access to SQL Server, or pull it from SQL Server.
We've got a couple of links talking about it at SQLServerPedia:
How can I synchronize data between MS Access and SQL Server databases?
How can I link a SQL Server database to MS Access using link tables in MS Access?
Again, it's not easy - judging by the way you worded the question, you're not going to like the answers that you'll read about. You may want to bring in someone who's experienced with web-based databases and replication in order to bring you up to speed and set your expectations about how challenging this will be.