I have a few questions about data storage techniques:
How do I store CPU usage activity for later use?
How do I store RAM usage variation over a certain amount of time?
Similarly, how do I store disk usage?
All of this data will later be used for an ANOVA test.
I am trying to capture these values from a C# application that will monitor the system's activity for a certain amount of time.
A much better idea is to use the Performance Monitor built into Windows (perfmon.exe). You can set it to record many performance items, including the three you mention (CPU and RAM per program as well as in total). There is also a free analyser called PAL on CodePlex which can help you set up the recording and then analyse it for you.
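If you do want to capture the same counters from your own C# application instead, they are exposed through System.Diagnostics.PerformanceCounter (built into .NET Framework, available as a NuGet package on .NET Core). A minimal sketch, appending samples to a flat CSV that an ANOVA can consume later; the counter names follow the standard Windows counter sets, and the interval, duration and file name are placeholder choices:

```csharp
using System;
using System.Diagnostics;
using System.IO;
using System.Threading;

class UsageLogger
{
    static void Main()
    {
        // Standard Windows counter sets; the PhysicalDisk instance name may differ per machine.
        using var cpu  = new PerformanceCounter("Processor", "% Processor Time", "_Total");
        using var ram  = new PerformanceCounter("Memory", "Available MBytes");
        using var disk = new PerformanceCounter("PhysicalDisk", "% Disk Time", "_Total");

        cpu.NextValue(); disk.NextValue();   // the first read of a rate counter always returns 0

        using var log = File.AppendText("usage.csv");
        log.WriteLine("timestamp,cpu_pct,ram_available_mb,disk_pct");

        for (int i = 0; i < 600; i++)        // e.g. 10 minutes at one sample per second
        {
            Thread.Sleep(1000);
            log.WriteLine($"{DateTime.UtcNow:o},{cpu.NextValue():F1},{ram.NextValue():F0},{disk.NextValue():F1}");
        }
    }
}
```

A plain file (or one table of timestamp/metric/value rows) is usually all the storage technique this needs before handing the data to a statistics tool.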
I'm currently developing an embedded controller which will be connected to a potentially hostile environment. Since the controller is quite limited (~50 MHz, ~16 KiB RAM), I do not have the luxury of an operating system that can help me with memory protection.
What is considered best practices for securing an embedded device? I know of techniques like stack guards, but since I'm not familiar with embedded development, I'm looking for some kind of guidance.
Edit: I'm using an ATSAMD21G18, which does not have an MMU. It's the same chip as used on many Arduinos. The controller will be connected to a public bus (as in wiring, not the transportation method), so I cannot assume anything about the behaviour of other bus members.
I am, however, not trying to protect IP; I'm not worried about somebody figuring out the contents of my controller. It's more about application security: how do I limit the harm done by somebody trying to take over my controller by exploiting, for example, buffer overruns?
Automotive MCUs typically have a "copy cat" protection which blocks any form of debugger access: you can't read anything out of the MCU or debug it while this is active; you have to erase everything.
Check out MCUs by silicon vendors with a lot of automotive customers, such as NXP/Freescale or Renesas.
For a small start-up mobile app/website, what options are there for storing its data? For example, a physical server or a cloud-hosted database such as Azure.
Any other options or insight would be helpful. Thank you!
Edit:
For some background: I'm looking at something that users could regularly upload data to, and that consumers could query for results through an app or website.
I guess it depends on your workload and also on your choice of data store. Generally, SQL-based storage is costlier in cloud-based solutions because it can typically only be scaled vertically, whereas NoSQL stores are cheaper.
So, in my opinion, you should first decide on your choice of data store, which depends on the following factors:
The type of data: is your data structured, or does it fall into the unstructured category?
The operations you will perform on the data: do you have any transactional use cases?
The read/write pattern: is it a read-heavy use case or a write-heavy one?
These factors should help you decide on an appropriate data store. Each database has its own set of advantages and disadvantages; the trick is to choose one based on your use cases and the factors mentioned above.
Hope it helps.
My employer runs a Hadoop cluster. As our data is rarely larger than 1 GB, I have found that Hadoop is rarely needed to meet the needs of our office (this isn't big data). However, my employer seems to want to be able to say we're using our Hadoop cluster, so we're actively seeking out data that needs analysis with our big fancy tool.
I've seen some reports saying that anything less than 5 TB shouldn't use Hadoop. What's the magic size at which Hadoop becomes a practical solution for data analysis?
There is no magic size. Hadoop is not only about the amount of data; it also involves resources and processing "cost". Processing one image that may require a lot of memory and CPU is not the same as parsing a text file, and Hadoop is used for both.
To justify the use of Hadoop, you need to answer the following questions:
Is your process able to run on one machine and complete the work on time?
How fast is your data growing?
Reading 5 TB once a day to generate a report is not the same as reading 1 GB ten times per second from a customer-facing API. But if you haven't faced these kinds of problems before, you very probably don't need Hadoop to process your 1 GB :)
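To make the first question concrete: a low-gigabyte text file is usually comfortable for a single process if you stream it rather than load it all at once. A rough illustrative sketch (the file path and the word-count task are placeholders, not anyone's actual workload):

```csharp
using System;
using System.IO;
using System.Linq;

class SingleMachineCount
{
    static void Main()
    {
        // Streams the file line by line, so memory stays flat even for a ~1 GB input.
        var counts = File.ReadLines(@"C:\data\input.txt")
            .SelectMany(line => line.Split(new[] { ' ' }, StringSplitOptions.RemoveEmptyEntries))
            .GroupBy(word => word)
            .ToDictionary(g => g.Key, g => g.Count());

        foreach (var pair in counts.OrderByDescending(p => p.Value).Take(10))
            Console.WriteLine($"{pair.Key}: {pair.Value}");
    }
}
```

If a job like this completes on time on one box, distributing it across a cluster buys you nothing but overhead.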
I'm building an app that will need to store approximately 100-200 GB of JSON data per month, with ~20,000 write operations per minute.
Is there any service that won't require millions of dollars to store this data?
One option is to use Azure's HDInsight. You'd pay for the HDInsight servers in addition to the storage of the data. Of course your costs will keep climbing as you add more and more data, so some form of archive would make sense. How long do you have to keep data easily available?
HDInsight Pricing
Storage Pricing
I think you may be overestimating your data growth. I would start with either AWS or Azure, and build my own datacenter if volume goes near the level you are talking about. Yes, this involves some migration later on, but it's always good to grow by observation.
Thanks, everybody. In the end, I decided to go with Azure Table Storage.
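For anyone landing here with a similar write-heavy JSON workload, a minimal sketch of the Table Storage write path, assuming the Azure.Data.Tables SDK; the connection string, table name, partition scheme and payload are placeholders:

```csharp
using System;
using Azure.Data.Tables;

class TableWriter
{
    static void Main()
    {
        var client = new TableClient("<storage-connection-string>", "telemetry");
        client.CreateIfNotExists();

        // Spread a high write rate (~20,000/min) across many partition keys, e.g. per
        // device or per time bucket, rather than funnelling everything into one partition.
        var entity = new TableEntity("device-001", Guid.NewGuid().ToString())
        {
            ["Payload"] = "{ \"sensor\": 42 }"   // the raw JSON document stored as a string property
        };
        client.AddEntity(entity);
    }
}
```

Batching writes per partition and archiving old partitions to cheaper blob storage are the usual levers for keeping the monthly bill predictable.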
Has anybody used neural network approaches for clustering data? In particular:
ART neural networks (Adaptive Resonance Theory), or
Kohonen self-organizing maps.
How do they compare to k-means or other distance-based clustering algorithms?
Self-organizing maps (SOMs) have some internal similarities with k-means, but also important differences. A SOM actually maps your data from the original data space (usually high-dimensional) onto the map space (usually two-dimensional), while trying to preserve the original data densities and neighborhood relationships. It won't give you the clustering directly, but it may help you visually inspect the data and recognize clusters.
I know too little about ART nets.
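To make the mapping idea above concrete, here is a toy SOM sketch of my own (not taken from any library): a small 2-D grid of weight vectors, where each training step pulls the best-matching unit and its grid neighbours towards the sample. The grid size, decay schedules and constants are arbitrary illustration choices.

```csharp
using System;

class ToySom
{
    readonly int width, height, dim;
    readonly double[,,] weights;              // weights[y, x, d]: one vector per map cell
    readonly Random rng = new Random(42);

    public ToySom(int width, int height, int dim)
    {
        this.width = width; this.height = height; this.dim = dim;
        weights = new double[height, width, dim];
        for (int y = 0; y < height; y++)
            for (int x = 0; x < width; x++)
                for (int d = 0; d < dim; d++)
                    weights[y, x, d] = rng.NextDouble();   // random initialisation
    }

    // Best matching unit: the map cell whose weight vector is closest to the sample.
    public (int y, int x) Winner(double[] sample)
    {
        double best = double.MaxValue;
        (int y, int x) bmu = (0, 0);
        for (int y = 0; y < height; y++)
            for (int x = 0; x < width; x++)
            {
                double dist = 0;
                for (int d = 0; d < dim; d++)
                {
                    double diff = weights[y, x, d] - sample[d];
                    dist += diff * diff;
                }
                if (dist < best) { best = dist; bmu = (y, x); }
            }
        return bmu;
    }

    public void Train(double[][] data, int iterations)
    {
        double sigma0 = Math.Max(width, height) / 2.0, lr0 = 0.5;
        for (int t = 0; t < iterations; t++)
        {
            double frac = (double)t / iterations;
            double sigma = sigma0 * (1 - frac) + 0.5 * frac;   // shrinking neighbourhood radius
            double lr = lr0 * (1 - frac) + 0.01 * frac;        // decaying learning rate
            double[] sample = data[rng.Next(data.Length)];
            var (by, bx) = Winner(sample);

            // Pull the winner and its map neighbours towards the sample.
            for (int y = 0; y < height; y++)
                for (int x = 0; x < width; x++)
                {
                    double gridDist2 = (y - by) * (y - by) + (x - bx) * (x - bx);
                    double h = Math.Exp(-gridDist2 / (2 * sigma * sigma));
                    for (int d = 0; d < dim; d++)
                        weights[y, x, d] += lr * h * (sample[d] - weights[y, x, d]);
                }
        }
    }
}
```

After training, calling Winner for each sample gives its map coordinates; clusters show up as groups of neighbouring cells that attract many samples, which is the visual inspection described above.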