So I have these two applications connected through a REST API (JSON messages). One is written in Django and the other in PHP. I have an exact database replica on both sides (using MySQL).
My question is: how can I keep these two applications' databases synchronized?
In other words, when I press "submit" in one of them, I want that data to be saved in the current app's database and also in the remote database of the other app, via REST.
Is there a Django app that does that? I read about django-synchro, but didn't see anything REST-related.
And I would like to keep things asynchronous; in other words, the user must be able to keep using the app while this process runs in the background and keeps the data consistent.
I had a look at Celery and Redis, and it seems like a cron-style background job will do what I need.
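To make it concrete, here is a rough sketch of what I'm imagining: a Celery task (with Redis as the broker) that pushes each saved record to the other app's REST endpoint right after the local save. The endpoint URL, the Article fields, and the save_article_locally helper are only placeholders for my actual code.

    # tasks.py -- rough sketch; assumes Celery is already configured with Redis as the broker
    import requests
    from celery import shared_task

    PHP_API_URL = "https://php-app.example.com/api/articles"  # placeholder endpoint

    @shared_task(bind=True, max_retries=5, default_retry_delay=30)
    def push_to_php_app(self, payload):
        """POST one saved record to the remote PHP app; retry on network errors."""
        try:
            resp = requests.post(PHP_API_URL, json=payload, timeout=10)
            resp.raise_for_status()
        except requests.RequestException as exc:
            raise self.retry(exc=exc)

    # views.py -- queue the remote sync right after the local save, without blocking the user
    def submit(request):
        article = save_article_locally(request)  # placeholder for the normal local save
        push_to_php_app.delay({"id": article.pk, "title": article.title, "body": article.body})
        ...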
Related
I'm implementing a Django web service, which is about to have apps on different platforms: ReactJS for computers, a Swift app for iOS, and a Kotlin app for Android devices. The protocol is a REST API, and perhaps a chat feature will be included, in which case Django Channels will be used as well. The data format is JSON. For deployment I intend to use Docker, covering Django, Celery, and the ReactJS app, and the database is PostgreSQL on another, separate server. I was thinking of collecting some user activity data and some history logs to show users what they have done so far. After hours of searching, I came up with Kafka! Unfortunately, I have no idea how to use Kafka, how to integrate all of this together, or how to deploy it. I wish there were a system schema for this specific kind of system that shows what is what and where is what.
Kafka will only integrate your database and Django, with some effort, and ideally a separate Kafka Connect service.
From React (or other clients), you'll need to query some Django API routes, which will then query your database. Kafka won't help with your frontend, and isn't really what exposes the history/activity you're interested in displaying. In other words, you could simply write that to the database and skip Kafka entirely.
Essentially, you're following the CQRS design pattern if you properly separate Kafka writes from end user / UI reads.
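To illustrate the "just write it to the database" option: a plain Django model plus a small helper is usually enough for this kind of user history. This is only a rough sketch; the ActivityLog model, its fields, and log_activity are example names, not anything Django or Kafka prescribes.

    # models.py -- minimal activity-log sketch, written straight to your PostgreSQL database
    from django.conf import settings
    from django.db import models

    class ActivityLog(models.Model):
        user = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE,
                                 related_name="activities")
        action = models.CharField(max_length=100)           # e.g. "created_post"
        detail = models.JSONField(default=dict, blank=True)  # Django 3.1+; free-form context for the action
        created_at = models.DateTimeField(auto_now_add=True)

    # call this from your views / DRF viewsets wherever an action happens
    def log_activity(user, action, **detail):
        ActivityLog.objects.create(user=user, action=action, detail=detail)

The React, Swift, and Kotlin clients then read the history through an ordinary Django REST endpoint that queries this table; no Kafka is involved on the read path.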
"shows what is what and where is what"
It's unclear what this means, but data lineage and metadata tools are a whole separate thing. For example, LinkedIn DataHub collects information such as this.
I'm working on a web application and I'm having a problem accessing the database on the server side, because there is no user for the DB proxy to map. In other words, I have a method which starts as soon as the application comes online and calls itself every 5 seconds to check for new messages. If it receives a specified message, it then goes to the database and finds whatever it needs. However, accessing the database on the server side isn't possible, because there is no user for the DB proxy to map. So what is a good design pattern for this type of application? Do I need an application account for this type of automated process?
By the way, I'm using WebLogic with JPA 2.1 for the database layer.
Thanks in advance.
First of all, what exactly do you mean by "no user for the DB proxy to map"?
I assume you mean that you don't have a session-bound user who connects to the database?
If so, you usually wouldn't do that anyway; instead, you nearly always have a dedicated database user for your application. Then, whether a user triggers a database call through an action or the backend triggers it through some scheduling, it is always the same database user who performs it. In your Java EE application, you'd have a datasource with this user in its configuration, and all parts of your application use the related entity manager for persistent actions and queries.
1- Let's say the web app is hosted on some cloud like Azure, AWS, etc.
2- And let's say a user changes his profile details on my web app...
3- I am assuming that the request with the new data will hit one of the servers/VMs inside the cloud.
4- Let's say the data gets saved in a SQL Server database hosted on the same server/VM the request landed on...
Now the questions.
What I am really confused about is this:
1- The data will be saved in one single database in the first place, so how does it get synced to the other servers instantly (if that even happens; I am not sure about this)? Because there is no guarantee that the next request from the same user will land on the same server.
2- And if the above scenario is invalid and there is instead a shared database server for all application servers inside the cloud, isn't that going to become a bottleneck? Ultimately the database server will get overloaded, because all the servers/VMs hosting the application will be hitting the same database at once.
I know it's a broad question and I don't know if I have explained it properly,
but please ask me about anything I haven't made clear.
Any help, whether it's a good link explaining the internals or a series of Q&A with me, would be great, as I have to design such a mechanism and I couldn't find any standard approaches or how the market leaders are doing it.
My question is: how do I get information from a server into my iPhone app? Let's assume I have completed the project I'm currently working on, and it only needs data to be uploaded to the application.
I understand there is a database or server I must create, but how do I go about creating or modifying one for my needs?
I mainly want to store login information for a user and allow other users to search for people who have entered that login information (by name), so they can be added to a friends list within the current app.
I think in your case django-tastypie for the backend would be a good choice, since using Django you can develop it quickly, and Tastypie provides API services that can easily be used for retrieving and sending data.
You can go through the documentation here:
http://django-tastypie.readthedocs.org/en/latest/
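For example, a minimal Tastypie setup could look like the sketch below; the UserProfile model, its name field, and the app name myapp are placeholders for whatever you actually store.

    # myapp/api.py -- minimal Tastypie resource sketch (UserProfile is a placeholder model)
    from tastypie.constants import ALL
    from tastypie.resources import ModelResource
    from myapp.models import UserProfile

    class UserProfileResource(ModelResource):
        class Meta:
            queryset = UserProfile.objects.all()
            resource_name = "profile"
            allowed_methods = ["get", "post"]
            filtering = {"name": ALL}  # lets the iPhone app search people by name

    # urls.py -- register the resource so it is served under /api/v1/
    from django.conf.urls import include, url
    from tastypie.api import Api
    from myapp.api import UserProfileResource

    v1_api = Api(api_name="v1")
    v1_api.register(UserProfileResource())

    urlpatterns = [
        url(r"^api/", include(v1_api.urls)),
    ]

Your iPhone app would then request something like /api/v1/profile/?format=json&name=alice and get JSON back for the friends-list search.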
Take a look at services like StackMob or Parse. These types of services can make it really easy to get the server-side part of your application up and running. They would act as your database and also provide an easy API for you to access the server-side pieces.
We have an application that we're deploying on GAE. I've been tasked with coming up with options for replicating the data that we're storing in the GAE datastore to a system running in Amazon's cloud.
Ideally we could do this without having to transfer the entire data store on every sync. The replication does not need to be in anything close to real time, so something like a once or twice a day sync would work just fine.
Can anyone with some experience with GAE help me out here with what the options might be? So far I've come up with:
Use the Google-provided bulkloader.py to export the data to CSV, somehow transfer the CSV to Amazon, and process it there
Create a Java app that runs on GAE, reads the data from the data store and sends the data to another Java app running on Amazon.
Do those options work? What would be the gotchas with those? What other options are there?
You could use logic similar to what the App Engine HRD migration or backup tools do:
1. Mark modified entities with a child entity marker
2. Run a MapperPipeline using the App Engine mapreduce library, iterating over those entities with a Datastore Input Reader
3. In your map function, fetch the parent entity, serialize it to Google Storage using a File Output Writer, and remove the marker
4. Ping the remote host to import those entities from the Google Storage URL
As an alternative to steps 3 and 4, you could make multiple urlfetch(POST) calls to send each serialized entity to the remote host directly, but this is more fragile, as a single failure could compromise the integrity of your data import.
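As a rough sketch of that urlfetch alternative (the remote import URL and the receiving endpoint are placeholders, and this assumes the old db datastore API):

    # sketch of the urlfetch(POST) variant -- sends one serialized entity to the remote importer
    from google.appengine.api import urlfetch
    from google.appengine.ext import db

    REMOTE_IMPORT_URL = "https://importer.example.com/gae-entity"  # placeholder endpoint

    def send_entity(entity):
        """Serialize a datastore entity to protobuf bytes and POST it to the remote host."""
        payload = db.model_to_protobuf(entity).Encode()
        result = urlfetch.fetch(
            url=REMOTE_IMPORT_URL,
            payload=payload,
            method=urlfetch.POST,
            headers={"Content-Type": "application/octet-stream"},
            deadline=30,
        )
        if result.status_code != 200:
            raise RuntimeError("import failed for %s" % entity.key())

On the receiving side you would decode the bytes with db.model_from_protobuf (or simply store them for offline processing), and you would still want batching and retries around this before trusting it with production data.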
You could look at the datastore admin source code for inspiration.