Uploading Databases - database

How does one go about uploading a database like Apache Cassandra after creating one? Furthermore, is there a way to upload/share only its skeleton structure, without the data gathered in it? I'm on MacOS and would like to use Python to do all of this. Thank you!

Based on your second comment, I take it you want the database to be remotely accessible to clients/apps that aren't running locally.
Clients/apps connect to Cassandra on the IP address configured as rpc_address and the CQL port configured as native_transport_port (default 9042) in cassandra.yaml.
Since your Cassandra instance is running on your laptop, only clients/apps on your local network will be able to reach it, and only if you set rpc_address to an IP address reachable on that network (the default is localhost).
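For example (illustrative values only - substitute an address from your own LAN), the relevant settings in cassandra.yaml look like this:
# cassandra.yaml
rpc_address: 192.168.1.50        # an address reachable on your LAN instead of localhost
native_transport_port: 9042      # the CQL port clients connect to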
If you're just trying out Cassandra and want to collaborate with other developer friends, try Astra and launch a Cassandra instance on the free tier (no credit card required). With it you can share the database credentials with your friends and they can connect to it over the internet.
You can connect to Astra from your Python app using the DataStax Python driver. Otherwise, Astra includes Stargate.io pre-configured and ready to use. Stargate is a data access gateway that lets you connect to Cassandra from your app using the REST API, GraphQL API, or JSON/Doc API without having to learn CQL. For more info, see Connecting to your Astra database. Cheers!
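For illustration, connecting from Python with the DataStax driver (pip install cassandra-driver) looks roughly like the sketch below; the bundle path, client ID, and secret are placeholders you get from the Astra console:
from cassandra.cluster import Cluster
from cassandra.auth import PlainTextAuthProvider

# Placeholders: download the secure connect bundle and generate credentials in the Astra console
cloud_config = {"secure_connect_bundle": "/path/to/secure-connect-mydb.zip"}
auth_provider = PlainTextAuthProvider("client_id", "client_secret")

cluster = Cluster(cloud=cloud_config, auth_provider=auth_provider)
session = cluster.connect()
print(session.execute("SELECT release_version FROM system.local").one())
cluster.shutdown()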

Related

How to make my Neo4j db available from external networks?

I have a DB with the structure I need, which I work with using Neo4j Desktop. I've deployed an app that works with my db on the local network, but it can't be reached from external networks. Now I need to fix that, but I can't find any info about it.
When I tried to connect to the db via the address http://localhost:PORT from my phone connected to the same network, I didn't get anything related to the db.
I tried to add some settings in neo4j.conf file such as org.neo4j.server.webserver.address=0.0.0.0 and dbms.connector.http.address=0.0.0.0:7474.
Also, I have a DDNS, but I don't know how to use it to connect to my db.
I'm expecting to be able to connect to my db from any network.

Is connecting to my SQL server by IP on the server computer itself any different from connecting from other computers, for testing purposes?

I have a VB.NET application that utilises databases in an SQL server. I am currently testing the application on the same computer the server is hosted on.
I connect to the server through the following connection string...
("Data Source = " & Master.CurrentIP.Text & ",1433;Network Library=DBMSSOCN;Initial Catalog=ExcelDM;User ID=" & Master.CurrentUser.Text & ";Password=" & Master.CurrentPass.Text & ";")
"Master.CurrentIP.Text" refers to my public IP address and not my computer's.
Basically, everything works perfectly when I test the application on this computer. I am wondering whether this is also a valid test for other computers connecting. Should I host my server on something that isn't my computer?
To clarify, remote connections are enabled on the server, port 1433 is open both incoming and outgoing through Windows Firewall and my router's port forwarding settings, and all TCP/IP options are enabled in SQL Server Configuration Manager, etc.
Based on your comments, I'd make the following assumptions:
You aren't holding any sensitive data, so security isn't a major concern
You are going to be running this on a LAN (local area network) and not over the web
If that's the case I'd suggest the following:
You are fine testing on your local machine - the connection will work the same over any protocol, local or remote, and given the small amount of data in a D&D campaign, you probably aren't going to be worried about performance even if your application is very chatty with SQL Server
Put your connection information in the application configuration file; this is supported in .NET Framework with helper types like ConfigurationManager, where you can access connection strings like so:
Config file:
<connectionStrings>
<add name="MyConnection" providerName="System.Data.SqlClient" connectionString="server=somehostname;database=Dungeons;uid=user;password=password" />
</connectionStrings>
C# code (requires a reference to System.Configuration):
string connectionString = ConfigurationManager.ConnectionStrings["MyConnection"].ConnectionString;
See here for more details:
https://learn.microsoft.com/en-us/dotnet/framework/data/adonet/connection-strings-and-configuration-files
Since your friends probably don't want to mess with your SQL server and you are probably not joined to a Windows domain, I'd say you are fine with putting secrets (user/pass) in the connection string in the configuration file.
I'd not bother with what I said about Windows security - basically the users on the client machines would be used as credentials to the SQL database, and this would be more of a headache to configure if you aren't all joined to a domain than just embedding a SQL user/pass in the config.
Edit:
Further to conversation, if you are writing an app that clients will be accessing over the web, using a direct SQL connection is not usually the best idea, but it can work if you can manage your clients/IPs.
Generally, opening your SQL server up to the internet is just asking to be attacked - and unless your SQL server is up to date, this can lead to the host machine being compromised.
At best it's an inconvenience, but if you are using that machine for anything other than D&D data, then you probably don't want someone snooping around on it.
In the case that you don't want to change your application architecture
You can whitelist your clients in SQL server/on the firewall. Since it's only friends (let's say 10-20 people?), you can manage their IPs without too much trouble.
This prevents the general internet from being able to access your server.
You could also use a VPN (either software or on your hardware if your router supports it). This also has the effect of putting your clients on your LAN essentially, removing the need for any firewall config apart from the VPN itself.
In the case you are interested in changing your app architecture
You can use a service based approach. This is what is generally used to secure web-based services - .NET framework supports this with WCF (Windows Communication Foundation).
This allows you to define service contracts that your server/client can adhere to.
The communication protocol/method itself is decided via configuration, so you can change what mechanism is used to communicate between client/server after-the-fact without having to change your application code.
This does require you to write a service layer though - you won't be able to directly access SQL from your client, but it could be a useful learning experience, especially if you are interested in doing work like this in the future.
Read about WCF here:
https://learn.microsoft.com/en-us/dotnet/framework/wcf/whats-wcf
There's also the REST-based approach, which sits at the HTTP level; .NET Framework supports this via ASP.NET Web API.
https://dotnet.microsoft.com/apps/aspnet/apis
... so in short, there are a few options

How to connect SQL Server with Swift

I'm working on an application for iOS (using Swift); the database already exists in SQL Server.
How will I use and connect to it? Do I need a web service to do that?
Thanks, all.
It is recommended to use a web service, since having the application talk directly to the database means you need to include the SQL credentials in the binary, and anyone with a copy of the application can extract them and do whatever they wish in the database. From a security point of view, this is bad.
The correct approach is to have a web server which will host an "API" -- a web application that will receive HTTP requests from the app and translate them to database queries and then will return the response in another format, such as JSON.
However, you need to be careful. This web service must use HTTPS and must validate its input in order to protect against attacks such as SQL injection.
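Purely as an illustration of the idea (not a production design), a minimal API layer could look like the following Python sketch, assuming Flask and pyodbc are installed; the server, table, and route names are placeholders:
from flask import Flask, jsonify
import pyodbc

app = Flask(__name__)

# Placeholder connection details - keep these on the server, never in the iOS binary
CONN_STR = ("DRIVER={ODBC Driver 17 for SQL Server};"
            "SERVER=your-sql-host,1433;DATABASE=YourDb;UID=api_user;PWD=secret")

@app.route("/items/<int:item_id>")
def get_item(item_id):
    conn = pyodbc.connect(CONN_STR)
    try:
        cur = conn.cursor()
        # Parameterized query - guards against SQL injection
        cur.execute("SELECT id, name FROM Items WHERE id = ?", item_id)
        row = cur.fetchone()
    finally:
        conn.close()
    if row is None:
        return jsonify(error="not found"), 404
    return jsonify(id=row[0], name=row[1])
The iOS app would then call this endpoint over HTTPS instead of opening a SQL connection itself.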

Remote Postgres Database Heroku Connection is slow from Digital Ocean Instance

I am using Apache2 and PHP 5.6.12. I decided to host my database remotely at Heroku (using PostgreSQL 9.4) and keep my server at Digital Ocean.
In my Yii 1 framework app, the connection string that I have added is the following:
'db'=>array(
'connectionString' =>
'pgsql:host=ec2-XX-XX-XX-XX.compute-1.amazonaws.com;port=6372;dbname=dddqXXXXX;sslmode=require',
'emulatePrepare' => true,
'username' => 'XXXX4dcXXXX',
'password' => 'XXXXXXXXXc34XXXXXXX123',
'charset' => 'utf8',
),
The connection is successful, but remote access makes even simple queries slow on my server at Digital Ocean. I read from Heroku that for remote access, SSL mode has to be enabled. So I did that, and I am still unable to figure out why the database connection is slow. It can take up to 5 seconds. I tried with a locally installed PostgreSQL database server and everything runs as expected. I am not sure how I can solve this; otherwise I will have to move away from Heroku and do it the traditional way, which is going to be very depressing. I hope that someone can help me.
Here is my phpinfo() output for pgsql:
Is there some settings that need to be done to speed up remote heroku database access in apache2 or php?
I was unable to ping the Postgres Heroku server as advised by Richard (Heroku has disabled pinging). It was very obvious that the connection between the Digital Ocean server and the Heroku Postgres server was slow, so I emailed Heroku directly to ask for their advice.
Heroku's Solution:
They claimed that applications connecting from a long distance outside the Heroku platform will have initial connection latency, and that this latency is a big problem.
The application has to establish a TCP connection, which the Postgres protocol then upgrades to an SSL connection. This takes quite a few packets and introduces a lot of latency, particularly if the app is creating a new connection for each query or page load.
Heroku recommended configuring the app to use something like the heroku-pgbouncer connection pool, which uses pgbouncer and stunnel to provide a configurable connection pool for the app endpoints.
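For anyone facing the same issue, the gist of that pooling advice - reuse connections instead of paying the TCP + SSL handshake on every query - can be sketched on the application side too. This is only an illustration with Python's psycopg2 (connection details are the placeholders from above), not the heroku-pgbouncer setup itself:
import psycopg2.pool

# Keep a small pool of open connections and hand them out per request
db_pool = psycopg2.pool.SimpleConnectionPool(
    minconn=1, maxconn=5,
    host="ec2-XX-XX-XX-XX.compute-1.amazonaws.com",
    port=6372, dbname="dddqXXXXX",
    user="XXXX4dcXXXX", password="XXXXXXXXXc34XXXXXXX123",
    sslmode="require",
)

conn = db_pool.getconn()
try:
    with conn.cursor() as cur:
        cur.execute("SELECT 1")
        print(cur.fetchone())
finally:
    db_pool.putconn(conn)   # return the connection to the pool, keeping it open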
Heroku's recommendation sounded too involved and challenging for me to deal with.
My Solution :: Use Database Labs
I found another Postgres-as-a-service provider called Database Labs. They allow users to select a data center region for better performance. Database Labs has an easy backend management platform and a friendly support team. The backend had minimal functionality, which I understand since they started in 2014.
However, after migrating to their service, the performance of my web page improved remarkably. The connection was like any standard connection, without the need for SSL. I am sharing my solution for the benefit of others who face a similar problem.
Heroku is definitely a good provider if we host our application on Heroku and use their database service. However, if you are a Digital Ocean user, I recommend that you use Database Labs. This saves a lot of time.
There isn't really a question here exactly, so this answer is more a guide to how to test the situation.
If you don't know enough to run a packet trace, you probably want to make sure your servers are all on the same network. However, try logging in to your Digital Ocean server and just pinging the Heroku one. Repeat for www.google.com and compare the times. That's assuming the Heroku server responds to pings.
You should be able to connect with "psql -h ...". Then you can run a "SELECT count(*) FROM <table>", then "SELECT * FROM <table> LIMIT 10000", then "LIMIT 20000". That will let you figure out how much time is spent just transferring data vs running the query.
It might just be that the connection between your servers is very slow. Can't say without testing.
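If you'd rather measure from code than from psql, a rough timing sketch in Python (assuming psycopg2; connection details and the table name are placeholders) would separate connection latency from query latency:
import time
import psycopg2

start = time.time()
conn = psycopg2.connect(host="ec2-XX-XX-XX-XX.compute-1.amazonaws.com",
                        port=6372, dbname="dddqXXXXX",
                        user="XXXX4dcXXXX", password="secret",
                        sslmode="require")
print("connect took %.3fs" % (time.time() - start))   # dominated by network round-trips + SSL handshake

cur = conn.cursor()
start = time.time()
cur.execute("SELECT count(*) FROM some_table")        # some_table is a placeholder
print(cur.fetchone(), "query took %.3fs" % (time.time() - start))
conn.close()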

Connecting to Google Cloud SQL from Eclipse Not Using App Engine

We are trying to connect to Google Cloud SQL from Eclipse using the Database Development perspective. To do so I'm trying to add a new Database Connection, which I was able to do successfully for a local MySQL instance running on my machine.
The motivation for doing this is that we currently run our JUnit tests against the local instance. However, we are switching to Hibernate and want to make sure that all of our configuration files work with Cloud SQL. As a general guide I've been using:
https://developers.google.com/appengine/articles/using_hibernate
We're diverging slightly in that we're using hibernate.cfg.xml instead of persistence.xml, but I don't think this will actually have a bearing on the current issue of simply connecting to the database. From another answer as well as some Google documentation I'm aware that I can't use the com.google.appengine.api.rdbms.AppEngineDriver, because that needs to be run from an AppEngine instance. Instead I'm trying to follow the directions here:
https://developers.google.com/cloud-sql/docs/external
and am using com.mysql.jdbc.Driver.
I have assigned my Cloud SQL instance an ip address and have added my current ip address to the whitelist, as described here:
https://developers.google.com/cloud-sql/docs/access-control#appaccess
My driver is the Connector/J driver I've been using successfully with the local instance, and the url I'm using is:
jdbc:google:rdbms://my-app:my-cloud-sql-instance/myDatabase
which I got based on:
https://developers.google.com/appengine/articles/using_hibernate
After adding the connection and setting the information I click Test Connection, which worked successfully on my local instance. However, this throws the following error:
java.lang.Exception: Connection failed with unspecified error.
at org.eclipse.datatools.connectivity.DriverConnectionBase.internalCreateConnection(DriverConnectionBase.java:110)
at org.eclipse.datatools.connectivity.DriverConnectionBase.open(DriverConnectionBase.java:54)
at org.eclipse.datatools.connectivity.drivers.jdbc.JDBCConnection.open(JDBCConnection.java:73)
at org.eclipse.datatools.enablement.internal.mysql.connection.JDBCMySQLConnectionFactory.createConnection(JDBCMySQLConnectionFactory.java:28)
at org.eclipse.datatools.connectivity.internal.ConnectionFactoryProvider.createConnection(ConnectionFactoryProvider.java:83)
at org.eclipse.datatools.connectivity.internal.ConnectionProfile.createConnection(ConnectionProfile.java:359)
at org.eclipse.datatools.connectivity.ui.PingJob.createTestConnection(PingJob.java:76)
at org.eclipse.datatools.connectivity.ui.PingJob.run(PingJob.java:59)
at org.eclipse.core.internal.jobs.Worker.run(Worker.java:53)
Obviously this isn't very helpful.
I've tried fiddling with the url, tried a number of users (none of which require passwords, so I'm leaving the password fields blank), and different versions of the driver for different versions of MySQL. Nothing has worked.
There are perhaps more deep-seated issues with doing it this way, such as how I will easily switch between test and deployment versions of my hibernate.cfg.xml, and I don't have good answers. I was just planning on editing them by hand back to the AppEngineDriver, which means I might run into further configuration issues at that point even if the JUnit tests are passing. Nevertheless, I think getting a connection set up to Cloud SQL that will allow JUnit testing will be a step in the right direction. I'd appreciate any input!
You should use jdbc:mysql://<cloudsql-instance-ip>:3306/<database-name> to connect from an external network. The connection string you are using is to connect from Google App Engine.
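As a quick sanity check of the external connection outside Eclipse, something like this Python sketch may help (assuming mysql-connector-python is installed, the instance has an assigned IP, and your client IP is whitelisted; all values below are placeholders):
import mysql.connector

conn = mysql.connector.connect(
    host="CLOUD_SQL_INSTANCE_IP",   # the instance's assigned IPv4 address
    port=3306,
    user="dbuser",
    password="dbpassword",
    database="myDatabase",
)
cur = conn.cursor()
cur.execute("SELECT 1")
print(cur.fetchone())
conn.close()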
