Issue with reading data from an Apache Cassandra database

I am having some trouble with Apache Cassandra. I have been trying to solve this problem for several weeks now.
This is my setup: I have two computers running Apache Cassandra (let's call them computer C1 and computer C2), and I create a keyspace with a replication factor of 2. This is so that each computer has a local copy of the data.
I have a program that reads a fairly large amount of data, say about 500 MB.
Scenario 1)
If only computer C1 has Cassandra running and I run the read program on computer C1, the read completes within half a minute to a minute.
Scenario 2)
I now start the Cassandra instance on computer C2 and run the read program on computer C1 again; it now takes a very long time to complete, on the order of 20 minutes.
I am not sure why this is happening. The read consistency is set to "One".
Expected performance
Ideally, the read program on both computers C1 and C2 should complete quickly. This should be possible, as both computers have a local copy of the data.
Can anyone please point me in the right direction? I really appreciate the help,
Thanks
Update: Network Usage
This may not mean much, but I monitored the network connection using nethogs. When both Cassandra nodes are up and I read the database, Cassandra uses bandwidth to communicate with the other node. Presumably these are read repairs occurring in the background; since I use the read consistency level 'One' and the closest node with the required data is the local computer's Cassandra instance (all nodes have all the data), the data should be served from the local computer...
Update: SQLTransentExceptions: TimedOutException()
When both nodes are up, however, the program that reads the database hits several SQLTransentExceptions: TimedOutException(). I use the default timeout of 10 seconds. That raises the question of why the statements are timing out when all data retrieval should come from the local instance. Also, the same SQL code runs fine if only one node is up.

There is no such thing as a read consistency of "ANY" (that only applies to writes). The lowest read consistency is ONE. You need to check what your read consistency really is.
Perhaps your configuration is set up in such a way that a read requires data from both servers to be fetched (if both are up), and fetching data from C2 to C1 is really slow.
Force set your read consistency level to "ONE".
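For example, here is a minimal sketch of forcing ONE on each statement with the DataStax Java driver (3.x is assumed; the keyspace and table names are placeholders, and your client library may expose this differently):

```java
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.ConsistencyLevel;
import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.SimpleStatement;

public class ReadWithOne {
    public static void main(String[] args) {
        try (Cluster cluster = Cluster.builder()
                .addContactPoint("127.0.0.1")           // local node
                .build();
             Session session = cluster.connect("my_keyspace")) {   // placeholder keyspace

            // Explicitly force the read consistency to ONE for this statement.
            SimpleStatement stmt = new SimpleStatement("SELECT * FROM my_table");
            stmt.setConsistencyLevel(ConsistencyLevel.ONE);

            ResultSet rs = session.execute(stmt);
            System.out.println(rs.one());
        }
    }
}
```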

You appear to have a token collision, which in your case translates to both nodes owning 100% of the keys. What you need to do is reassign one of the nodes such that it owns half the tokens. Use nodetool move (use token 85070591730234615865843651857942052864) followed by nodetool cleanup.
The slow speeds most likely come from high network latency, which, when multiplied across all your transactions (with some subset actually timing out), results in a correspondingly long job time. Many client libraries use auto node discovery to learn about new or downed nodes, then round-robin requests across available nodes. So even though you're only telling it about localhost, it's probably learning about the other node on its own.
In any distributed computing environment where nodes must communicate, network latency and reliability are a huge factor and must be dealt with.
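If you want to confirm that the auto-discovered remote node is what's slowing you down, one option is to pin the client to the local node only. A hedged sketch with the DataStax Java driver (3.x assumed; the address and default CQL port 9042 are assumptions, and your actual client library may differ):

```java
import java.net.InetSocketAddress;
import java.util.Collections;

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.policies.RoundRobinPolicy;
import com.datastax.driver.core.policies.WhiteListPolicy;

public class LocalOnlyCluster {
    public static Cluster build() {
        // Only the whitelisted local address will ever receive requests,
        // even if the driver discovers the other node via gossip.
        return Cluster.builder()
                .addContactPoint("127.0.0.1")
                .withLoadBalancingPolicy(new WhiteListPolicy(
                        new RoundRobinPolicy(),
                        Collections.singletonList(new InetSocketAddress("127.0.0.1", 9042))))
                .build();
    }
}
```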

Related

How to synchronize distributed system data across cassandra clusters

Hypothetically speaking, I plan to build a distributed system with Cassandra as the database. The system will run on multiple servers, say servers A, B, C, D, E, etc. Each server will have a Cassandra instance, and all servers will form a cluster.
In my hypothetical distributed system, X of the total servers should process user requests; e.g., 3 of servers A, B, C, D, E should process requests from user uA. Each application should update its Cassandra instance with an exact copy of the data. For example, if user uA sends a message to user uB, each application should update its database with an exact copy of the message sent and its recipient, and, as expected, Cassandra should take over from that point to ensure all nodes are up to date.
How do I configure Cassandra so that it first checks that all copies inserted into the database are exactly the same before updating all other nodes?
Psst: kindly keep explanations as simple as possible. I'm new to Cassandra, crossing over from MySQL. Thank you in advance.
Every time a change happens in Cassandra, it is communicated to all relevant nodes (nodes that have a replica of the data). But sometimes that doesn't happen, either because a node is down or too busy, the network fails, etc.
What you are asking is how to get consistency out of Cassandra, or in other terms, how to make a change and guarantee that the next read has the most up to date information.
In Cassandra you choose the consistency level for each query you make, so you can have consistent data if you want it. There are multiple consistency options, but normally you would only use:
ONE - Only one node has to get or accept the change. This means fast reads/writes, but low consistency (if you write to A, someone can read from B before it has been updated).
QUORUM - More than half of your nodes must get or accept the change. This means reads and writes are not as fast, but you get FULL consistency IF you use it in BOTH reads and writes. That's because if more than half of your nodes have your data after you insert/update/delete, then when reading from more than half of your nodes, at least one node will have the most recent information, and that is the version that will be delivered. (If you have 3 nodes A, B, C and you write to A and B, someone can read from C, but the quorum read will also include A or B, so it will always get the most up-to-date information.)
Cassandra knows what is the most up to date information because every change has a timestamp and the most recent wins.
You also have other options such as ALL, which is NOT RECOMMENDED because it requires all nodes to be up and available. If a node is unavailable, your system is down.
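To make the QUORUM read/write pattern above concrete, here is a minimal sketch using the DataStax Java driver (3.x assumed); the keyspace, table, and column names are placeholders:

```java
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.ConsistencyLevel;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.SimpleStatement;

public class QuorumExample {
    public static void main(String[] args) {
        try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
             Session session = cluster.connect("chat")) {   // placeholder keyspace

            SimpleStatement write = new SimpleStatement(
                    "INSERT INTO messages (id, body) VALUES (1, 'hello')");
            write.setConsistencyLevel(ConsistencyLevel.QUORUM);  // >50% of replicas must ack
            session.execute(write);

            SimpleStatement read = new SimpleStatement("SELECT body FROM messages WHERE id = 1");
            read.setConsistencyLevel(ConsistencyLevel.QUORUM);   // >50% of replicas are consulted
            Row row = session.execute(read).one();
            System.out.println(row.getString("body"));
        }
    }
}
```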
Cassandra Documentation (Consistency)

What happens when bandwidth is exceeded with a Microsoft SQL Server connection?

The application used by a group of 100+ users was made with VB6 and RDO. A replacement is coming, but the old one is still maintained. Users moved to a different building across the street and problems began. My opinion has been that the problem is bandwidth, but I've had to argue with others who say it's the database. Users regularly experience network slowness using the application, but also with workstation tasks in general. The application moves large audio files and indexes them on occasion, among other tasks. Occasionally the database becomes hung.
We have many top-end, robust SQL Servers, so it is not a server problem. What I figured out is that a transaction is begun on a connection but fails to complete properly because of a communication error. Updates from other connections become blocked, they continue stacking up, and users are down half a day. The moment I'm told of a problem, after verifying the database is hung, I set the database to single user and then back to multiuser to clear connections. They must all restart their applications.
Today I found out there is a bandwidth limit at their new location, which they regularly max out. I think in the old location there was a big pipe servicing many people, but now they are on a small pipe servicing a small number of people, which is also less tolerant of momentary spikes in bandwidth demand.
What I want to know is exactly what happens to packets, both coming and going, when a bandwidth limit is reached. Also I want to know what happens in SQL Server communication. Do some packets get dropped? Do they start arriving more out of sequence? Do timing problems occur?
I plan to start controlling such things as file moves through the application. But I also want to know what configurations are usually present on network nodes regarding transient high demand.
This is a very broad question. Networking is key to good performance (especially in Availability Groups or any sort of mirroring setup). When transactions complete on the SQL Server, the results are placed in the output buffer. The app then needs to 'pick up' that data, clear the output buffer, and continue on. I think (without knowing your configuration) that your apps aren't able to complete the round trip because the network pipe is inundated with requests, so the apps can't get what they need to successfully finish and close out. This causes havoc, as the network can't keep up with what the apps and the SQL Server are trying to do. Then you have a 200-car pileup on a one-lane highway.
Hindsight being what it is, there should have been extensive testing of the network capacity before everyone moved across the street. Clearly that didn't happen, so you are left to do what you can with what you have. If the company can't get a stable network connection, the situation may be out of your control. If you're the DBA, I highly recommend you speak to your higher-ups and explain to them the consequences of the reduced network capacity. Often, showing the consequences of inaction can lead to action.
Out of curiosity, is there any way you can analyze what waits are happening when the pileup occurs? I'm thinking it will be something along the lines of ASYNC_NETWORK_IO, which usually indicates that SQL Server is waiting on the app to come back and pick up its data.

Coordinator node and its impact on performance

I was studying up on Cassandra, and I understand that it is a peer-to-peer database with no masters or slaves.
Each read/write is facilitated by a coordinator node, which then forwards the read/write request to the appropriate nodes using the replication strategy and snitch.
My question is around the performance problems with this method.
1) Isn't there an extra hop?
2) Is the write buffered and then forwarded to the right replicas?
3) How does the performance change with different replication strategies?
4) Can I improve the performance by bypassing the coordinator node and writing to the replica nodes myself?
1) There will occasionally be an extra hop, but your driver will most likely use a token-aware strategy for selecting the coordinator, which will choose a coordinator that is a replica for the given partition.
2) The write is buffered, and depending on your consistency level you will not receive acknowledgment of the write until it has been accepted on multiple nodes. For example, with consistency level ONE you will receive an ACK as soon as the write has been accepted by a single node. The other nodes will have the writes queued up and delivered, but you will not receive any info about them. In the case that one of those writes fails or cannot be delivered, a hint will be stored on the coordinator to be delivered when the replica comes back online. Obviously there is a limit to the number of hints that can be saved, so after long downtimes you should run repair.
With higher consistency levels the client will not receive an acknowledgment until the number of nodes in the CL have accepted the write.
3) The performance should scale with the total number of writes. If a cluster can sustain a net 10k writes per second but has RF = 2, you most likely can only do 5k client writes per second, since every write is actually two. This happens regardless of your consistency level, since those writes are sent even though you aren't waiting for their acknowledgment.
4) There is really no way to get around the coordination. The token-aware strategy will pick a good coordinator, which is basically the best you can do. If you manually attempted to write to each replica, your write would still be replicated by each node which received the request, so instead of one coordination event you would get N. This is also most likely a bad idea, since I would assume you have a better network between your C* nodes than from your client to the C* nodes.
I don't have answers for 2 and 3, but here are my thoughts on 1 and 4.
1) Yes, this can cause an extra hop.
4) Yes, well, kind of. The DataStax driver, as well as the Netflix Astyanax driver, can be set to be token aware, which means it will listen to the ring's gossip to learn which nodes own which token ranges and send the insert to the coordinator on the node where it will be stored, eliminating the additional network hop.
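For illustration, here is a minimal sketch of enabling token awareness with the DataStax Java driver (3.x assumed; the contact point is a placeholder, and recent driver versions already wrap the default policy this way):

```java
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.policies.DCAwareRoundRobinPolicy;
import com.datastax.driver.core.policies.TokenAwarePolicy;

public class TokenAwareCluster {
    public static Cluster build() {
        return Cluster.builder()
                .addContactPoint("10.0.0.1")   // placeholder contact point
                // Wraps the datacenter-aware round-robin policy so requests go
                // straight to a replica that owns the partition, avoiding the
                // extra coordinator hop where possible.
                .withLoadBalancingPolicy(new TokenAwarePolicy(
                        DCAwareRoundRobinPolicy.builder().build()))
                .build();
    }
}
```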
To add to Andrew's response, don't assume the coordinator hop is going to cause significant latency. Run your queries and measure. Think about consistency levels more than the extra hop. Tune your consistency for higher read or higher write speed, or a balance of the two. Then MEASURE. If you find latencies to be unacceptable, you may then need to tweak your consistency levels and/or change your data model.

What are good algorithms to keep consistency across multiple files in a network?

What are good algorithms to keep consistency in multiple files?
This is a school project. I have to implement, in C, some replication across a network.
I have 2 servers,
Server A1
Server A2
Both servers have their own file called "data.txt"
If I write something to one of them, I need the other to be updated.
I also have another scenario, with 3 Servers.
Server B1
Server B2
Server B3
I need these to do pretty much the same thing.
While this would be fairly simple to implement, if one or two of the servers were to go down, they would have to update themselves when coming back up.
I'm sure there are algorithms that solve this efficiently. I know what I want, I just don't know exactly what I'm looking for!
Can someone point me in the right direction, please?
Thank you!
The fundamental issue here is known as the 'CAP theorem', which defines three properties that a distributed system can have:
Consistency: Reading data from the system always returns the most up-to-date data.
Availability: Every request gets a response, either success or failure (it doesn't just keep waiting until things recover)
Partition tolerance: The system can operate when its servers are unable to communicate with each other (a server being down is one special case of this)
The CAP theorem states that you can only have two of these. If your system is consistent and partition tolerant, then it loses the availability condition - you might have to wait for a partition to heal before you get a response. If you have consistency and availability, you'll have downtime when there's a partition, or enough servers are down. If you have availability and partition tolerance, you might read stale data, or have to deal with conflicting writes.
Note that this applies separately to reads and writes - you can have an Available and Partition-Tolerant system for reads, but a Consistent and Available system for writes. This is basically a master-slave system; in a partition, writes might fail (if they're on the wrong side of the partition), but reads will work (although they might return stale data).
So if you want to be Available and Partition Tolerant for reads, one easy option is to just designate one host as the only one that can do writes, and sync from it (eg, using rsync from a cron script or something - in your C project, you'd just copy the file over using some simple network code periodically, and do an extra copy just after modifying it).
If you need partition tolerance for writes, though, it's more complex. You can have two servers that can't talk to each other both doing writes, and later have to figure out what data wins. This basically means you'll need to compare the two versions when syncing and decide what wins. This can just be as simple as 'let the highest timestamp win', or you can use vector clocks as in Dynamo to implement a more complex policy - which is appropriate here depends on your application.
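To make the "highest timestamp wins" idea concrete, here is a small reconciliation sketch (shown in Java for brevity even though the project is in C; the Record type and in-memory maps are stand-ins for whatever format your servers actually store on disk):

```java
import java.util.HashMap;
import java.util.Map;

public class LastWriteWinsMerge {

    // Hypothetical record: a value plus the time the write happened.
    static final class Record {
        final String value;
        final long timestampMillis;

        Record(String value, long timestampMillis) {
            this.value = value;
            this.timestampMillis = timestampMillis;
        }
    }

    /** Merge two replicas: for each key, keep the record with the newest timestamp. */
    static Map<String, Record> merge(Map<String, Record> a, Map<String, Record> b) {
        Map<String, Record> merged = new HashMap<>(a);
        for (Map.Entry<String, Record> entry : b.entrySet()) {
            merged.merge(entry.getKey(), entry.getValue(),
                    (r1, r2) -> r1.timestampMillis >= r2.timestampMillis ? r1 : r2);
        }
        return merged;
    }
}
```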
Check out rsync and how Dropbox works.
With every write to server A, fork a process to write the same content to server B, so that all writes to server A are replicated to server B. If you have multiple servers, make the forked process write across all the backup servers, as sketched below.
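A rough sketch of that idea (shown in Java, so a thread pool stands in for fork(); the host names, port, and line-based protocol are assumptions made for illustration):

```java
import java.io.IOException;
import java.io.OutputStream;
import java.net.Socket;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ReplicatedWriter {
    private final ExecutorService pool = Executors.newCachedThreadPool();
    private final List<String> backups = List.of("serverB", "serverC");  // placeholder hosts

    public void write(String line) throws IOException {
        // 1) Apply the write locally first.
        Files.write(Paths.get("data.txt"), (line + "\n").getBytes(StandardCharsets.UTF_8),
                StandardOpenOption.CREATE, StandardOpenOption.APPEND);

        // 2) Replicate the same write to every backup asynchronously.
        for (String host : backups) {
            pool.submit(() -> {
                try (Socket s = new Socket(host, 9000);
                     OutputStream out = s.getOutputStream()) {
                    out.write((line + "\n").getBytes(StandardCharsets.UTF_8));
                } catch (IOException e) {
                    // A real implementation would queue the write and retry
                    // when the backup comes back up.
                    e.printStackTrace();
                }
            });
        }
    }
}
```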

simple Solr deployment with two servers for redundancy

I'm deploying the Apache Solr web app in two redundant Tomcat 6 servers,
to provide redundancy and improved availability. At this point, scalability is not an issue.
I have a load balancer that can dynamically route traffic to one server or the other or both.
I know that Solr supports master/slave configuration, but that requires manual recovery if the slave receives updates during the master outage (which it will in my use case).
I'm considering a simpler approach using the ability to reload a core:
- only one of the two servers is receiving traffic at any time (the "active" instance), but both are running,
- both instances share the same index data and
- before re-routing traffic due to an outage, the now active instance is told to reload the index core(s)
Limited testing of failovers with both index reads and writes has been successful. What implications/issues am I missing?
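For reference, here is a rough sketch of how my failover script could trigger the reload in the last step, assuming Solr's CoreAdmin RELOAD action and a core named "core0" (the host, port, and core name are placeholders for my actual setup):

```java
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;

public class SolrCoreReloader {
    // Ask the instance that is about to become active to reload its core
    // before the load balancer routes traffic to it.
    public static void reload(String host, int port, String core) throws IOException {
        URL url = new URL("http://" + host + ":" + port
                + "/solr/admin/cores?action=RELOAD&core=" + core);
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");
        int status = conn.getResponseCode();
        conn.disconnect();
        if (status != 200) {
            throw new IOException("Core reload failed with HTTP " + status);
        }
    }

    public static void main(String[] args) throws IOException {
        reload("localhost", 8080, "core0");   // Tomcat default port assumed
    }
}
```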
Your thoughts and opinions welcomed.
The simple approach to redundancy you're considering seems reasonable, but you will not be able to use it for disaster recovery unless you can share the data/index to/from a different physical location using your NAS/SAN.
Here are some suggestions:
Make backups for disaster recovery, and test that those backups work, as an index could conceivably become corrupted since there are no checksums happening internally in SOLR/Lucene. An index could get wiped, or some records could get deleted and merged away without you knowing it; backups can be useful for recovering those records/docs later if you need to perform an investigation.
Before you re-route traffic to the second instance, I would run some queries to warm the caches and also to test and confirm that the current index works before it goes online.
Isolate the updates to one location, process, and thread to ensure transactional integrity in the event of a cutover, as it can be difficult to manage consistency since SOLR does not use a vector clock to synchronize updates like some databases do. I would personally keep an ordered copy of all updates separately from SOLR in some other store, just in case a small time window needs to be replayed.
In general, my experience with SOLR has been excellent as long as you are not using cutting-edge features and plugins. I have one instance that currently has 40 million docs and an uptime of well over a year with no issues. That doesn't mean you won't have issues, but it gives you an idea of how stable it can be.
I hardly know anything about Solr, so I don't know the answers to some of the questions that need to be considered with this sort of setup, but I can provide some things for consideration. You will have to consider what sorts of failures you want to protect against and why and make your decision based on that. There is, after all, no perfect system.
Both instances are using the same files. If the files become corrupt or unavailable for some reason (hardware fault, software bug), the second instance is going to fail the same as the first.
On a similar note, are the files stored and accessed in such a way that they are always valid when the inactive instance reads them? Will the inactive instance try to read the files when the active instance is writing them? What would happen if it does? If the active instance is interrupted while writing the index files (power failure, network outage, disk full), what will happen when the inactive instance tries to load them? The same questions apply in reverse if the 'inactive' instance is going to be writing to the files (which isn't particularly unlikely if it wasn't designed with this use in mind; it might for example update some sort of idle statistic).
Also, reloading the indices sounds like it could be a rather time-consuming operation, and service will not be available while it is happening.
If the active instance needs to complete an orderly shutdown before the inactive instance loads the indices (perhaps due to file validity problems mentioned above), this could also be time-consuming and cause unavailability. If the active instance can't complete an orderly shutdown, you're gonna have a bad time.

Resources