How to access mnesia on a remote server

I have an application built on Erlang/Cowboy, and the database is mnesia. The node name is webserver@127.0.0.1.
Since there is no GUI on the remote server, I want to use a local observer to access the remote mnesia.
I tried many times, but still failed. Can anyone help me out? (Assume the IP of remote server is 10.123.45.67)

Your remote Erlang node name should be webserver@10.123.45.67 instead of webserver@127.0.0.1.
You also need to set the same cookie for both nodes, as well as the same node naming convention. By naming convention I mean short name (-sname flag) or long name (-name flag), because a node with a long node name cannot communicate with a node with a short node name.
Note that if your remote server's real IP is not on a trusted network, exposing the node like this is not good security practice.
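For example, a minimal sketch, assuming a shared cookie of mycookie and 10.123.45.68 as your local machine's IP (both made up for illustration):
On the remote server: erl -name webserver@10.123.45.67 -setcookie mycookie
On your local machine: erl -name local@10.123.45.68 -setcookie mycookie -hidden
Then, in the local Erlang shell:
net_kernel:connect_node('webserver@10.123.45.67').
observer:start().
In observer, use the Nodes menu to switch to webserver@10.123.45.67; the Table Viewer tab then lets you browse its mnesia tables. You may also need to open epmd's port (4369) and the Erlang distribution port range on the remote server's firewall.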

Related

Setup VoltDB Cluster IP

I am trying out a VoltDB cluster and created a cluster of 2 nodes with k=1.
Cluster initialization was successful, and both nodes are up.
Now, how do I connect to this cluster? I could not find any documentation on setting up a single IP for the cluster.
Will the client connect to a particular node IP or a cluster IP?
I am using VoltDB community edition.
In general, you can connect to one node, or to multiple nodes. For simple usage, one node is fine. For a client application where you want lower latency and higher throughput, you should connect to all of the nodes in the cluster. See Connecting to the VoltDB Database for the Java client, and in particular section 6.1.2 on using the auto-connecting client, which lets you connect to only one node while the client automatically connects to all of the other nodes.
For command-line access, see the sqlcmd reference:
--servers=server-id[,...]
Specifies the network address of one or more nodes in the database cluster. By default, sqlcmd attempts to connect to a database on localhost.
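For example, to point sqlcmd at two nodes of the cluster (addresses made up):
sqlcmd --servers=192.168.1.5,192.168.1.6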
Disclosure: I work at VoltDB.
If you wish to connect to a single node try
jdbc:voltdb://192.168.1.5:<port>
as the connection URL or if you wish to connect to cluster try
jdbc:voltdb://192.168.1.5:<port>,192.168.1.6:<port>,<any additional nodes you might have in your cluster>
as the connection url.

password protection using selenium grid and remote nodes

When using Selenium Grid with remote nodes, how can I execute commands on the node without passing information in the clear between the grid and the node? The site I am testing uses https, so communication between the node and the site is secure, but what about between the hub and the node? Is there any way to secure that? Has anyone tried port forwarding on both the hub and the node?
Thank you. With the help of that link and a little tinkering, I got it to work. In case it helps someone, here is basically what I did. This is the case where I am running the grid on my local machine (at home) and I have nodes running on remote laptops.
Generate an RSA key on the remote machine, and append id_rsa.pub to ~/.ssh/authorized_keys on the local machine running the grid, making sure you have file/directory permissions set correctly
Make sure you have a fixed IP for your local machine. I used the AirPort Utility, under network options, DHCP Reservations. (Info about how to do this is generally easily web-searchable)
Open up port 22 on your local router. I did this using the AirPort Utility, network options, Port Settings. At this point you should be able to ssh from the remote machine to the local machine successfully, without using a password.
Start port forwarding on the remote machine, with something like this: ssh -N -L 4444:${HUB_IP}:4444 ${USER_NAME}@${HUB_IP}. Now all data that is sent to port 4444 on the remote machine will be sent securely to end up on port 4444 on the local machine (this presumes that your grid is set up on 4444; a consolidated sketch follows these steps)
Start the grid on the local machine, using port 4444
Start the node on the remote machine with the hub setting of -hub http://localhost:4444/grid/register -port {whatever_you_want_for_driver_but_not_4444}
I put this all into a script that runs from the local machine; it calls scripts on the remote machine, so you also need to be able to ssh from the local machine to the remote machine. It is a bit of a hassle to set this up, but once it is done, you can start one script to start the hub and as many nodes as you like.
I think now I can pass information securely between the hub and the nodes.
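Putting the tunnel and node startup together, a rough sketch of what runs on the remote machine (the jar file name and node port 5555 are placeholders; HUB_IP and USER_NAME are as above):
ssh -f -N -L 4444:${HUB_IP}:4444 ${USER_NAME}@${HUB_IP}
java -jar selenium-server-standalone.jar -role node -hub http://localhost:4444/grid/register -port 5555
The -f flag just backgrounds the ssh tunnel so the same script can go on to start the node.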
I have not done this personally, but this link may help you.
For logging into websites, I have usually tried to log in via an API and then insert the cookie into the driver session so logging in was not needed via Selenium.
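For example, a rough sketch with the Java bindings, assuming the session cookie is called JSESSIONID and you already obtained its value from your login API (both assumptions):
import org.openqa.selenium.Cookie;
// navigate to the site first so the cookie can be attached to its domain
driver.get("https://example.com");
driver.manage().addCookie(new Cookie("JSESSIONID", sessionIdFromApi));
driver.navigate().refresh();  // reload as the logged-in user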

NodeJS - modifying one object from two different Node servers

Let's say I have an array in a node server at IP xx.xx.xx; let's call it the mobile server because only mobile users can access it:
var users = [{username: "jim", stats: "x"}]
Now I have another node server at IP yy.yy.yy, for PC users only.
I want user "jim" to be able to access his user record through HTTP requests from his PC and his mobile as well, and also to modify his object whenever he makes changes on one of the devices. Is this possible to achieve, performance-wise and security-wise?
Yes, this is possible. You would want both node servers to access the same db for simplicity. You can use any database, nosql (mongodb) or a traditional sql one (mysql, postgresql, etc).
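A rough sketch of what that could look like with Express and the MongoDB driver (database URL, collection and route names are all made up), run identically on both servers:

const express = require('express');
const { MongoClient } = require('mongodb');

const app = express();
app.use(express.json());

// both the mobile server and the PC server point at the same database
const client = new MongoClient('mongodb://db.example.com:27017');
const users = client.db('app').collection('users');

// read a user's object from either device
app.get('/users/:username', async (req, res) => {
  res.json(await users.findOne({ username: req.params.username }));
});

// update the same object from either device
app.put('/users/:username', async (req, res) => {
  await users.updateOne({ username: req.params.username }, { $set: req.body });
  res.sendStatus(204);
});

client.connect().then(() => app.listen(3000));

Security-wise, both servers should still authenticate the request (session or token check) before letting it touch the user document.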

High Availability Groups and Fail Over

I have an active and a passive node for HAGs in SQL 2012. The passive node is constantly hit when there is a "READ-ONLY" parameter passed with the connection string. So, for example, reports would use this type of connection on an everyday basis.
Recently we had a QA environment set up with HAGs as active-active, so NO passive node. From conversing with the DBAs I was told that the difference between the two setups (active-active vs active-passive) is that in a failover situation the active-active setup would allow "READ-WRITE" connections to continue to work.
In an active-passive failover situation any "READ-WRITE" connections would not work because the passive DB would only allow "READ-ONLY" type connections. Furthermore, tools like SSRS would fail because they can only be set up on one node at a time. Currently we only have it installed on the passive node. That doesn't make sense, because the passive node is just one node, which means we should also be able to install it on the active node. Technically this all sort of makes sense... but then it doesn't.
Isn't one of the main purposes of HAG groups to provide fail over protection regardless of the setup? Can anyone shed light on this?
I think that either you misunderstood your DBAs or they're not correct.
In an availability group, you have three options with regards to how you want secondary nodes to behave (in order from most to least permissive):
Allow any connections
Allow only connections that specify application intent as readonly
Allow no connections
You also have two options for the primary replica (again in order from most to least permissive):
Allow any connections
Allow only connections that specify application intent as readwrite
What makes this slightly confusing is that this preference is configured per replica. That is, you could have the following configuration:
Node A
Primary: Accepts any connection
Secondary: Accepts no connections
Node B
Primary: Accepts read-write connections
Secondary: Accepts read-only connections
In a failover situation, the role of the primary node is transferred to another node, and each replica obeys whatever semantics are configured for it. So, in my example above, if the primary is Node A, any application connecting to it will be accepted while only read-only connections will be accepted at Node B. When a failover happens (making Node B the primary), only read-write connections will be accepted at Node B while no connections will be accepted at Node A. I think that to avoid confusion, configuring all of the nodes the same way is best. But talk with your DBAs and ask what each node's behavior is in the primary and secondary roles.
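For reference, the routing is driven by the ApplicationIntent keyword in the connection string; a sketch (listener and database names made up):
Server=tcp:AGListener,1433;Database=MyDb;Integrated Security=SSPI;ApplicationIntent=ReadOnly;MultiSubnetFailover=True
Server=tcp:AGListener,1433;Database=MyDb;Integrated Security=SSPI;ApplicationIntent=ReadWrite;MultiSubnetFailover=True
A connection that does not specify ApplicationIntent=ReadOnly is rejected by a secondary configured to allow only read-intent connections, which is the rejection behavior described above.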

Multiple "Default" instances in SQL Server cluster? (AKA multiple clustered instances without requiring an instance name to connect)

I'm setting up multiple SQL instances on an active/active cluster, and on our existing SQL Cluster, the cluster name is SQLCLUSTER, but we access the instances as SQLCLUSTERINST1\Instance1, SQLCLUSTERINST2\Instance2, etc. Since each instance has its own IP and network name anyway, can I install SQL as the "Default" instance on each network name? I'd really like to access my instances without having to give the instance name (ie, instead of the above, just SQLCLUSTERINST1, SQLCLUSTERINST2, etc), but my understanding of SQL is that, even in a cluster, the instance name is required, even though the IP already uniquely identifies an instance.
Does anybody know if I can do this? I'm about to install the first instance, and I wanted to get an answer to this before I start installing them as named instances if I don't need to. It just seems redundant, and potentially unnecessary, to have to give both the instance cluster name and the instance name to connect to a server when just the instance cluster name would uniquely identify a SQL instance as-is. I would expect one default instance per cluster group (as they'd share an IP), but not per cluster.
You can only use a default instance in an active/passive cluster. The reason is that you cannot have multiple default instances installed on the same server, and clustering requires each instance to be installed on every node of the cluster to support failover.
I ended up finding a work-around for this. While I installed named instances on the cluster, I can access them using port 1433 on each DNS name, so I don't have to provide the instance name to connect, which is what I was after.
To get this accomplished, I have to modify the listener configuration to force each instance to listen on port 1433 on its dedicated IP, rather than just relying on dynamic ports and the SQL Browser.
I've detailed the steps on my blog.
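Roughly, the listener change in SQL Server Configuration Manager looks like this for each instance (using the names from the question; the exact IPs depend on your cluster):
Protocols for Instance1 > TCP/IP > Protocol tab: Listen All = No
Protocols for Instance1 > TCP/IP > IP Addresses tab, on the IP dedicated to SQLCLUSTERINST1: Enabled = Yes, TCP Dynamic Ports = (blank), TCP Port = 1433
After restarting the instance, clients can connect with just the network name, e.g. sqlcmd -S SQLCLUSTERINST1, with no instance name or port required.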
Good idea, rwmnau. I haven't read your blog post yet, but I suspect the limitation revolves around registry keys or directory structures. Remember, each node only has one registry hive for SQL Server. There's a registry key that lists the instances on the box. It's a space-separated list. I'm pretty sure that list has to have distinct values in it, therefore you can't have more than one MSSQLSERVER instance. The internal instance name for default instances is MSSQLSERVER. So I think, if nothing else, there's your limitation. However, I do think you have a wonderful idea with using port 1433 for all instances involved. Good job and thanks for sharing. I think I might try that myself on my next cluster!
