Does the amount of data matter in data analytics? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 3 years ago.
I want to know whether data analytics can be done with a small amount of data, say 100 to 1000 records stored in a database. If I do so, is it still called data analytics?
Someone told me that analysing a small amount of data is not data analytics at all.
I think I am confusing data analytics with big data. Can anyone clarify this for me?
Many thanks in advance.

If you are analyzing data to discover information that aids decision making, it is called data analytics irrespective of the size of the data. However, it might not be called big data analytics; that is probably what your friend/colleague meant.
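To make the point concrete, here is a minimal sketch of analytics on a deliberately small data set. The records and field names are invented for illustration; the idea is simply that aggregating ~100 rows to answer a decision-making question already counts as data analytics.

```python
import random
import statistics

# Simulate a small table of ~100 order records (hypothetical data).
random.seed(42)
orders = [{"region": random.choice(["north", "south"]),
           "amount": round(random.uniform(10, 200), 2)}
          for _ in range(100)]

# A basic analytic question: what is the mean order amount per region?
by_region = {}
for order in orders:
    by_region.setdefault(order["region"], []).append(order["amount"])

summary = {region: round(statistics.mean(amounts), 2)
           for region, amounts in by_region.items()}
print(summary)
```

Nothing here requires "big data" tooling; the same question asked over billions of rows would need distributed infrastructure, which is where the "big data analytics" label comes in.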

Related

Doubts on storing CPU, RAM and disk usage values [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 5 years ago.
I have a few doubts about database storage techniques:
How do I store CPU usage activity for later use?
How do I store RAM usage variation over a certain period of time?
Similarly, how do I store disk usage?
All of this data will later be used for an ANOVA test.
I am trying to get these values from a C# application that will monitor the activities of a system for a certain amount of time.
A much better idea is to use the Performance Monitor built into Windows (perfmon.exe). You can set it to record many performance counters, including the three you mention (CPU and RAM per process as well as in total). There is also a free analyser called PAL on CodePlex which can help you set up the recording and then analyse it for you.
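If you do collect the samples yourself rather than via perfmon, a simple timestamped table keeps the values grouped and ready for a later ANOVA. This is a hedged sketch with simulated values (the table name, columns, and data are assumptions); in your C# app you would insert the counters you actually collect.

```python
import random
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE usage_sample (
                    taken_at REAL NOT NULL,   -- Unix timestamp of the sample
                    metric   TEXT NOT NULL,   -- 'cpu', 'ram', or 'disk'
                    value    REAL NOT NULL    -- percent in use
                )""")

# Simulate ten sampling ticks, one row per metric per tick.
random.seed(1)
now = time.time()
for tick in range(10):
    for metric in ("cpu", "ram", "disk"):
        conn.execute("INSERT INTO usage_sample VALUES (?, ?, ?)",
                     (now + tick, metric, random.uniform(0, 100)))

# Later analysis pulls each metric back as a group for the ANOVA.
rows = conn.execute("SELECT metric, COUNT(*), AVG(value) "
                    "FROM usage_sample GROUP BY metric "
                    "ORDER BY metric").fetchall()
print(rows)
```

One long, narrow table like this is easier to slice into ANOVA groups than three separate per-metric tables.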

Is it bad practice to duplicate data across different flux stores? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 7 years ago.
Say you want each feature to have its own store for modularity, but multiple features may need the same data x. Is it bad practice to hold x in Feature1Store, Feature2Store, etc.?
Yes, if the store represents the state of the data. No, if the store represents the state of the component. A well-accepted rule about data is that you only want one instance of that data; otherwise you will have consistency issues. But if the data is not changing and you are only providing sorting, filtering, or some other component-level state, multiple instances are OK.
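The distinction can be sketched language-agnostically (names here are hypothetical, and a real Flux app would express this with JavaScript stores and a dispatcher): one canonical store owns the data, and each feature store holds only view state that derives from it.

```python
class DataStore:
    """Single source of truth for the shared records."""
    def __init__(self):
        self.items = [{"id": 1, "name": "b"}, {"id": 2, "name": "a"}]


class FeatureStore:
    """Holds only component-level state; reads data from the shared store."""
    def __init__(self, data_store, sort_key):
        self.data_store = data_store
        self.sort_key = sort_key          # view state, safe to duplicate

    def view(self):
        # Derive a view of the shared data; never copy and own it.
        return sorted(self.data_store.items, key=lambda r: r[self.sort_key])


data = DataStore()
feature1 = FeatureStore(data, sort_key="name")
feature2 = FeatureStore(data, sort_key="id")

# A change in the one data store is visible to every feature's view,
# with no risk of the copies drifting apart.
data.items.append({"id": 3, "name": "c"})
print([r["id"] for r in feature1.view()])
```

Duplicating `sort_key` per feature is harmless; duplicating `items` per feature is where the consistency issues start.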

Benefits of Using Datasets [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 8 years ago.
I understand that datasets contain datatables and that they can house the relationships between those datatables. I am making a simple form that gets data from SQL Server by way of stored procedures that return the records I need, subject to certain parameters. It is not strictly necessary that I model relationships between the datatables. Are there other benefits to using a dataset to contain them, or am I just as well off leaving them free-standing?
For example, you can automatically perform actions on related tables (such as cascading deletes), or add constraints that depend on those relations.
It will also help you draw a database diagram.
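The cascading-delete benefit can be illustrated with a small SQL sketch (a .NET DataSet does the analogous thing in memory via DataRelation; this SQLite example and its table names are assumptions for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")   # SQLite requires this per connection
conn.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("""CREATE TABLE orders (
                    id INTEGER PRIMARY KEY,
                    customer_id INTEGER
                        REFERENCES customer(id) ON DELETE CASCADE)""")

conn.execute("INSERT INTO customer VALUES (1, 'Ada')")
conn.executemany("INSERT INTO orders VALUES (?, 1)", [(10,), (11,)])

# Deleting the parent automatically removes the related child rows.
conn.execute("DELETE FROM customer WHERE id = 1")
remaining = conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
print(remaining)
```

Without the modeled relationship, you would have to remember to delete the child rows yourself in every code path.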

Designing a database for word frequency and text analysis [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 9 years ago.
I have a bunch of articles on which I want to do word frequency and trend analysis.
The articles are tagged with date, author, theme and subject. I want to use these tags to slice the data so that I can get the most common words used for a specific author (or group of authors), theme(s) or subject(s), both overall and over time (trend).
How would I design this database (relational or otherwise), or should I create a data cube?
Rizzoma.com built this with CouchDB (NoSQL) and Sphinx (a full-text search engine).
You can try to build it another way if you want, or evaluate that existing solution and replicate it.
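If you go the relational route instead, one plausible sketch (table and column names are assumptions) is an article table carrying the tags and a word-count table keyed by article, so that slicing by author, theme, or subject becomes a join:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE article (
        id INTEGER PRIMARY KEY,
        published TEXT, author TEXT, theme TEXT, subject TEXT);
    CREATE TABLE word_count (
        article_id INTEGER REFERENCES article(id),
        word TEXT, n INTEGER);
""")

# Two hypothetical articles by the same author.
conn.execute("INSERT INTO article VALUES (1,'2016-01-01','kim','tech','db')")
conn.execute("INSERT INTO article VALUES (2,'2016-02-01','kim','tech','nlp')")
conn.executemany("INSERT INTO word_count VALUES (?,?,?)",
                 [(1, "data", 5), (1, "word", 2), (2, "data", 3)])

# Most common words for a specific author, summed across their articles.
top = conn.execute("""
    SELECT w.word, SUM(w.n) AS total
    FROM word_count w JOIN article a ON a.id = w.article_id
    WHERE a.author = 'kim'
    GROUP BY w.word ORDER BY total DESC
""").fetchall()
print(top)
```

Grouping by a date expression instead of (or alongside) the word gives the over-time trend slices; a cube would precompute those same aggregations.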

What are the best ways to show live data in Silverlight? [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 8 years ago.
I'm using a stream engine that updates my database every second with a significant set of data.
I would suggest looking into using a WCF Duplex Service. I found several articles about implementing it by searching for "Silverlight WCF Duplex Service." You should also consider bringing back chunks of your data instead of the whole set each time, since the data set is significantly large as you mention.
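The chunking idea itself is language-neutral; a minimal sketch (the page size and data are placeholders, not a Silverlight API) is to page the result set so each push or poll carries a bounded slice instead of the whole data set:

```python
def chunks(rows, size):
    """Yield successive fixed-size slices of rows."""
    for start in range(0, len(rows), size):
        yield rows[start:start + size]


rows = list(range(2500))          # stand-in for the live result set
pages = list(chunks(rows, 1000))  # each page is at most 1000 rows
print([len(page) for page in pages])
```

In the duplex-service setup, the server would push one page per callback rather than re-sending everything that changed in the last second.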
