So, I am trying to create a database that can store thousands of malware binary files, with sizes ranging anywhere from a few KB to 50 MB. I am currently testing with Cassandra using blobs, but of course with files that big Cassandra isn't handling it that well. Does anyone have any good ideas, maybe for a better database or a better way to go about using Cassandra? I am relatively new to databases, so please be as detailed as possible.
Thank You
If you have your heart set on Cassandra, you would want to store the blob files outside Cassandra, as the large file sizes will cause problems with your compaction and repairs. Ideally you would store the blobs on a network store somewhere outside Cassandra. That said, Walmart apparently did do it previously.
Cassandra setup:
CREATE TABLE IF NOT EXISTS malware_table (
    malware_hash varchar,
    filepath varchar,
    date_found timestamp,
    object blob,
    -- other columns as needed
    PRIMARY KEY (malware_hash, filepath)
);
What we're doing here is creating a compound primary key with malware_hash as the partition key and filepath as a clustering column, so you can do SELECT * FROM malware_table WHERE malware_hash = ?. If there is a hash collision, you have two files to look at. Additionally, this lookup will be very fast, as it is a key-value lookup. Keep in mind that with Cassandra you can only query by your primary key.
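As a minimal sketch of that keyed lookup using the DataStax Python driver (the contact point and keyspace name malware_ks are assumptions):

from cassandra.cluster import Cluster

# Assumptions: a local Cassandra node and a keyspace named "malware_ks".
cluster = Cluster(["127.0.0.1"])
session = cluster.connect("malware_ks")

# Prepared statement for the partition-key lookup described above.
lookup = session.prepare(
    "SELECT malware_hash, filepath, date_found FROM malware_table WHERE malware_hash = ?"
)

rows = session.execute(lookup, ["<sha256-of-sample>"])
for row in rows:
    print(row.filepath, row.date_found)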
As it's not likely that you're going to be updating files from the past, you're going to want to use size-tiered compaction (SizeTieredCompactionStrategy) for faster lookups in the long run. This will be more expensive on disk space, since you'll need to keep roughly 50% of your disk free at any given time.
Alternative solution:
I would probably store this in S3/GCS or some other network store. Create a folder (prefix) named after each file's hash and store the file inside it, then use the API to determine whether a file is already there. If this is something being hit thousands of times a second, you would want to put a caching layer in front of it to reduce lookup times. The cost of an object store is going to be VASTLY cheaper than a Cassandra cluster and will likely scale better.
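A rough sketch of that approach, assuming S3 via boto3 and a hypothetical bucket name (malware-store):

import hashlib

import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
BUCKET = "malware-store"  # hypothetical bucket name


def sample_exists(key: str) -> bool:
    # HEAD request to check whether a sample with this hash is already stored.
    try:
        s3.head_object(Bucket=BUCKET, Key=key)
        return True
    except ClientError:
        return False


def store_sample(path: str) -> str:
    # Upload a file keyed by its SHA-256 hash and return the key.
    with open(path, "rb") as f:
        data = f.read()
    key = hashlib.sha256(data).hexdigest()
    if not sample_exists(key):
        s3.put_object(Bucket=BUCKET, Key=key, Body=data)
    return key

Keying objects by content hash also deduplicates identical samples for free; the metadata (filepath, date found, etc.) can still live in Cassandra or any relational table, referencing the hash.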
Problem
Every day we receive a new set of data files from our back-office application. This application is not able to produce an incremental changeset, so all it can do is dump to a large file.
Currently, every morning we drop our old MySQL tables and load the new data into our database.
One of the problems here is that we are unable to act on specific changes in the data. We also use CQRS and would benefit considerably from having an incremental list of changes.
File format is currently CSV
Data size per file is up to 10GB
Number of rows per file is up to 40 million
Approximately 30 data files
On average, less than 1% of rows change each day
Most files either have no primary key or a combined primary key. For many, the full row is the only thing that makes them unique.
The order of data is not fixed. Rows may switch positions
Desired situation
When we receive the new data, we calculate the difference and push a message into Kafka for each changed (if a row identifier exists), added, or removed row.
Technology
We use AWS and are able to use all technologies AWS offers
We are not limited to a certain amount of hardware. We can just start up some new servers in AWS
Cost is only a very limited factor. We have quite a large budget and the ability to have an incremental set offers us quite a lot of value.
We have a running Kubernetes cluster
Question
So the main question is: what would be the best way to compare these two large files and create an incremental set? We need it to be fast, preferably within the hour or close to that.
Are there database types that have this natively or are there technologies that can do this for us?
"...The order of data is not fixed. Rows may switch positions..." That is the one that makes it hard. If the rows did not change a git diff or text file comparison tool would work.
Spitballing here but:
For each row, create a SHA hash
Use the hash as a unique ID
Store each UNIQUE hash and its associated data in a DB table
After processing the file, dump the table into a text file (CSV/SQL/etc.)
Commit the file changes to source control
When you receive a new data set, check whether each hash already exists
If no: append the hash to the end of the table
If yes: ignore it
Dump the table into a text file (CSV/SQL/etc.)
'git diff' the commits to see the change sets (see the sketch after this list)
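A minimal sketch of the hashing step in Python (file names are hypothetical; for 40 million rows the two hash sets need a few GB of RAM, so run this on a suitably sized machine):

import csv
import hashlib


def row_hashes(path: str) -> set:
    # SHA-256 each raw row so the hash doubles as the row's unique ID.
    hashes = set()
    with open(path, newline="") as f:
        for row in csv.reader(f):
            hashes.add(hashlib.sha256("|".join(row).encode()).hexdigest())
    return hashes


old = row_hashes("dump_yesterday.csv")
new = row_hashes("dump_today.csv")

added = new - old      # rows to publish as "added"
removed = old - new    # rows to publish as "removed"

Because the comparison is on sets of hashes, row order does not matter; keep a hash-to-row mapping alongside the sets if you need the full row contents for the Kafka messages.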
Might be able to do this with AWS Glue...
Bonus:
To make it even easier, create a location where the back-office app can upload the files, and a cron job that processes them at a given time.
This process is a typical ETL (Extract-Transform-Load) task: you are extracting data from one source/format, transforming it, and loading/inserting it into a different source/format.
Let me know if any of this was helpful.
What database software can do all of these?
Scalability via data partitioning, such as consistent hashing.
Redundancy for failover. Both memory cache and disk storage together. Key-value data, where the value is a document type such as JSON.
Prefers A and P in the CAP theorem.
I heard that Memcached can do all of these, but I am not sure.
Here are the details:
Data storage volume: a JSON document of less than 30 KB per key, with more than 100,000,000 keys.
Data is accessed more than 10K times per second.
Persistence is needed for every key-value data.
No need for transaction.
Development environment is C#, but other languages are ok if the protocol spec is known.
Map reduce is not needed.
This is too short a spec description to choose a database. There are tons of other constraints to consider (data storage volume, data transfer volume, persistence requirements, the need for transactions, development environment, map reduce, etc.).
That being said:
Memcached and Redis are in-memory databases, which means you cannot store more than what your machine's memory can hold. This is less true now that distributed capabilities have been added to Redis.
Document databases (such as MongoDB or Microsoft DocumentDB) support all of this, and you can put Memcached or Redis in front of them; that's how most people use them.
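A minimal sketch of that cache-aside pattern, assuming Redis via redis-py in front of MongoDB via pymongo (host names, database and collection names are hypothetical):

import json

import redis
from pymongo import MongoClient

cache = redis.Redis(host="localhost", port=6379)
docs = MongoClient("mongodb://localhost:27017")["app"]["documents"]  # hypothetical names


def get_document(key):
    # Cache-aside read: try Redis first, fall back to MongoDB and repopulate the cache.
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)
    doc = docs.find_one({"_id": key}, {"_id": False})
    if doc is not None:
        cache.setex(key, 300, json.dumps(doc))  # keep hot keys cached for 5 minutes
    return doc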
I would add that most SQL databases can now deal with JSON too, so that also works, with a cache in front if needed.
There are plenty of resources on JSON-oriented databases worth looking at. But once again: that's too short a spec to choose a database.
Summary
I am facing the task of building a searchable database of about 30 million images (of different sizes) associated with their metadata. I have no real experience with databases so far.
Requirements
There will be only a few users, the database will be almost read-only (if things get written, it will be by a controlled automatic process), and downtime for maintenance should be no big issue. We will probably perform more or less complex queries on the metadata.
My Thoughts
My current idea is to save the images in a folder structure and build a relational database on the side that contains the metadata as well as links to the images themselves. I have read about document-based databases. I am sure they are reliable, but presumably the images would then only be accessible through a database query; is that true? In that case I am worried that future users of the data might be faced with the problem of learning how to query the database before actually getting things done.
Question
What database could/should I use?
Storing big fields that are not used in queries outside the "lookup table" is recommended for certain database systems, so it does not seem unusual to store the 30M images in the file system.
As to "which database", that depends on the frameworks you intend to work with, how complicated your queries usually are, and what resources you have available.
I had some complicated queries run for minutes on MySQL that were done in seconds on PostgreSQL and vice versa. Didn't do the tests with SQL Server, which is the third RDBMS that I have readily available.
One thing I can tell you: Whatever you can do in the DB, do it in the DB. You won't even nearly get the same performance if you pull all the data from the database and then do the matching in the framework code.
A second thing I can tell you: Indexes, indexes, indexes!
It doesn't sound like the data is very relational, so a non-relational DBMS like MongoDB might be the way to go. With any DBMS you will have to use queries to get information out of it. However, if you're worried about future users, you could put a software layer between the user and the DB that makes querying easier.
Storing images in the filesystem and metadata in the DB is a much better idea than storing large BLOBs in the DB (IMHO). I would also note that filesystem performance will be better if you spread the files over many folders and subfolders rather than keeping 30M images in one big folder (citation needed).
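As an illustration of that folder layout, here is a small sketch that shards files by hash prefix so no single directory ends up with 30M entries (the root path is hypothetical):

import hashlib
from pathlib import Path

ROOT = Path("/data/images")  # hypothetical storage root


def image_path(image_bytes: bytes) -> Path:
    # Shard into <root>/ab/cd/<hash>: 65,536 leaf directories, a few hundred files each.
    h = hashlib.sha256(image_bytes).hexdigest()
    return ROOT / h[:2] / h[2:4] / h


def store_image(image_bytes: bytes) -> str:
    path = image_path(image_bytes)
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_bytes(image_bytes)
    return str(path)  # store this path (or the hash) in the metadata table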
We have a service that currently runs on top of a MySQL database and uses JBoss to run the application. The rate of database growth is accelerating, and I am looking to change the setup to improve scaling. The issue is not a large number of rows nor (yet) a particularly high volume of queries, but rather the large number of BLOBs stored in the DB. In particular, the time it takes to create or restore a backup (we use mysqldump and Percona XtraBackup) is a concern, as is the fact that we will need to scale horizontally to keep expanding disk space in the future. At the moment the DB size is around 500 GB.
The kind of arrangement that I figure would work well for our future needs is a hybrid setup that uses both MySQL and some key-value database, where the latter stores only the BLOBs. The metadata as well as the data for user management and the application's business logic would remain in the MySQL DB and benefit from structured tables and full consistency. The application itself would handle consistency between the two databases.
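A rough sketch of that write path (all names are hypothetical; the key-value client is an assumed API and the MySQL connection is a PyMySQL-style DB-API connection): write the blob first and the metadata row second, so a failure at most leaves an orphaned blob to garbage-collect rather than a dangling reference.

import uuid


def store_object(blob: bytes, kv_store, mysql_conn) -> str:
    key = str(uuid.uuid4())
    kv_store.put(key, blob)  # assumed key-value client API
    cur = mysql_conn.cursor()
    cur.execute(
        "INSERT INTO blob_refs (blob_key, size_bytes) VALUES (%s, %s)",  # hypothetical table
        (key, len(blob)),
    )
    mysql_conn.commit()
    cur.close()
    return key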
The question is which database to use? There are lots of NoSQL databases to choose from. Here are some points on what qualities I am looking for:
Distributed over multiple nodes, which are flexible to add or remove.
Redundancy of storage, with the database automatically making sure each value object is stored on at least two different nodes.
Value objects' size could range from a few dozen bytes to around 100MB.
The database is accessed from a Java EJB application on top of JBoss as well as from a program written in C++ that processes the data in the DB. Some sort of connector for each would be needed.
No need for structure for the data. A single string or even just a large integer would suffice for the key, pure byte array for the value.
No updates for the value objects are needed, only inserts and deletes. If a particular object is made obsolete by a new object that fulfills the same role, the old object is deleted and a new object with a new key is inserted.
Having looked around a bit, Riak sounds good except for its problems with storing large value objects. Which database would you recommend?
I have around 10 tables containing millions of rows. Now I want to archive 40% of the data due to size and performance problems.
What would be the best way to archive the old data while keeping the web application running? And, in the near future, what if I need to show the old data alongside the existing data?
Thanks in advance.
There is no single solution for every case. It depends a lot on your data structure and application requirements. The most general cases seem to be as follows:
If your application can't be redesigned and instant access is required to all your data, you need a more powerful hardware/software solution.
If your application can't be redesigned but some of your data can be counted as obsolete because it is requested relatively rarely, you can split the data and configure two applications to access the different parts.
If your application can't be redesigned but some of your data can be treated as less critical and minimized (consolidated, packed, etc.), you can perform that data transformation while keeping the full data in another place for special requests.
If it's possible to redesign your application, there are many ways to solve the problem. In general you will implement some kind of archive subsystem, which is a complex problem, especially if not only your data changes over time but the data structure changes too.
If it's possible to redesign your application, you can optimize your data structure using new supporting tables, indexes and other database objects and algorithms.
Create an archive database and, if possible, maintain a separate archive server: this data won't be needed often but still has to be kept for future purposes, so moving it reduces load and space on the main server.
Move all of the old table data to that location. Later you can retrieve it back in a number of ways:
Changing the path in the application
or updating the live table from the archive table
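A minimal sketch of batched archiving along those lines, assuming the archive database lives on the same MySQL server (for a separate archive server you would dump and reload instead), a PyMySQL-style DB-API connection, and hypothetical table/column names:

CUTOFF = "2020-01-01"  # hypothetical cutoff date
BATCH = 10000


def archive_old_rows(conn):
    cur = conn.cursor()
    while True:
        # Pick a batch of old rows by primary key.
        cur.execute(
            "SELECT id FROM live_db.orders WHERE created_at < %s LIMIT %s",
            (CUTOFF, BATCH),
        )
        ids = [row[0] for row in cur.fetchall()]
        if not ids:
            break
        placeholders = ", ".join(["%s"] * len(ids))
        # Copy the batch into the archive database, then remove it from the live table.
        cur.execute(
            f"INSERT INTO archive_db.orders SELECT * FROM live_db.orders WHERE id IN ({placeholders})",
            ids,
        )
        cur.execute(
            f"DELETE FROM live_db.orders WHERE id IN ({placeholders})",
            ids,
        )
        conn.commit()
    cur.close()

Working in small committed batches keeps locks short, so the web application stays responsive while the old rows are being moved.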