Simulation is the imitation of the operation of a real-world process or system over time. The act of simulating something first requires that a model be developed; this model represents the key characteristics or behaviors of the selected physical or abstract system or process. The model represents the system itself, whereas the simulation represents the operation of the system over time.
Benchmarking is the process of comparing one's business processes and performance metrics to industry bests or best practices from other industries. Dimensions typically measured are quality, time, and cost. In the process of benchmarking, management identifies the best firms in their industry, or in another industry where similar processes exist, and compares the results and processes of those studied (the "targets") to its own results and processes. In this way, they learn how well the targets perform and, more importantly, the business processes that explain why those firms are successful.
For a few years now I have been working on a system that currently stores its data in a database. It is under quite heavy demand, handling millions of transactions.
There is no need to do this, but purely for fun I have long been wondering just how fast I could make it if I wrote the whole thing in C, reading and writing directly from disk. I know this is a little crazy.
All of the data fits in memory, so the biggest issue is going to be somehow storing a transaction log that can be replayed if the system crashes.
I am wondering what people with more C experience than me think about this.
If I understand the question correctly, I can see two options:
You could look at something like SQLite, which gives both the "written in C" and fast execution parts, in addition to handling your storage to disk. It is a file-based database and is very fast and resilient against system/program crashes.
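As a rough illustration of the first option, here is a minimal sketch of using SQLite from C, assuming the sqlite3 development library is installed (link with -lsqlite3); the table name and schema are made up. Batching many inserts inside a single transaction is what keeps the disk writes fast:

```c
/* Minimal sketch: SQLite from C as a fast, crash-resilient store.
   Table name and schema are hypothetical. */
#include <stdio.h>
#include <sqlite3.h>

int main(void) {
    sqlite3 *db;
    char *err = NULL;

    if (sqlite3_open("transactions.db", &db) != SQLITE_OK) {
        fprintf(stderr, "open failed: %s\n", sqlite3_errmsg(db));
        return 1;
    }

    /* WAL mode gives fast sequential writes and survives crashes. */
    sqlite3_exec(db, "PRAGMA journal_mode=WAL;", NULL, NULL, &err);

    sqlite3_exec(db,
        "CREATE TABLE IF NOT EXISTS tx(id INTEGER PRIMARY KEY, amount REAL);",
        NULL, NULL, &err);

    /* Batch many inserts inside one transaction to amortize fsync cost. */
    sqlite3_exec(db, "BEGIN;", NULL, NULL, &err);
    for (int i = 0; i < 1000; i++) {
        char sql[128];
        snprintf(sql, sizeof sql, "INSERT INTO tx(amount) VALUES (%d.0);", i);
        sqlite3_exec(db, sql, NULL, NULL, &err);
    }
    sqlite3_exec(db, "COMMIT;", NULL, NULL, &err);

    sqlite3_close(db);
    return 0;
}
```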
Alternatively, you could keep the live copy in memory and log all your data to disk yourself; but if you store the log as SQL-transaction text, it will be larger than the equivalent raw data. The trade-off is that something like SQLite will likely have more processing overhead than your hand-coded RAM storage method, but may have less to write to disk because it stores data in a raw (non-SQL) binary format.
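And a minimal sketch of the hand-rolled alternative: an append-only binary log that can be replayed after a crash. The record layout here is invented for illustration; the key point is the fsync() before a transaction is considered durable:

```c
/* Minimal sketch of a replayable append-only transaction log.
   Fixed-size binary records; the layout is hypothetical. */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>

struct tx_record {
    long id;
    double amount;
};

/* Append one record and force it to stable storage before returning. */
int log_append(int fd, const struct tx_record *rec) {
    if (write(fd, rec, sizeof *rec) != (ssize_t)sizeof *rec)
        return -1;
    return fsync(fd);   /* without this, a crash can lose the record */
}

int main(void) {
    int fd = open("tx.log", O_WRONLY | O_CREAT | O_APPEND, 0644);
    if (fd < 0) { perror("open"); return 1; }

    struct tx_record rec = { 1, 42.0 };
    if (log_append(fd, &rec) != 0) { perror("log_append"); return 1; }

    close(fd);
    return 0;
}
```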
I am a build engineer in my current position, but I dabble in applying AI techniques to improve our capabilities. What I am interested in is how your teams use AI techniques (pattern recognition, machine learning, Bayesian classification, or neural networks) in real life. I am looking for ideas on other ways of improving our processes and for fun projects to start.
Examples of things I have tried:
A naive Bayes classifier for automatically assigning class labels (misdemeanor, felony, traffic violation) to free-form text entered by court reporters.
A genetic algorithm that generated the team schedule for the year, with a fitness function that assigned demerits for scheduling conflicts, over- or under-allocation of team members, holidays, and personal time off (see the sketch after this list).
A binary associative memory for quickly querying environment configuration information for all applications deployed to all environments, including URLs, ports, source control location, environment, server, OS, etc.
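For the genetic-algorithm item above, here is a hypothetical sketch of what such a demerit-based fitness function might look like; all the weights, sizes, and inputs are made up, and a GA would evolve schedules that minimize this score:

```c
/* Hypothetical demerit-based fitness function for a yearly duty schedule.
   Lower score = better schedule. All weights are illustrative. */
#include <stdlib.h>

#define DAYS   365
#define PEOPLE 8

/* schedule[d] = index (0..PEOPLE-1) of the person on duty for day d */
int fitness(const int schedule[DAYS],
            const int is_holiday[DAYS],      /* 1 if day d is a holiday     */
            const int pto[PEOPLE][DAYS]) {   /* 1 if person p is off day d  */
    int demerits = 0;
    int load[PEOPLE] = {0};

    for (int d = 0; d < DAYS; d++) {
        int p = schedule[d];
        load[p]++;
        if (pto[p][d])     demerits += 10;   /* scheduled during time off   */
        if (is_holiday[d]) demerits += 2;    /* holiday coverage penalty    */
        if (d > 0 && schedule[d - 1] == p)
            demerits += 1;                   /* back-to-back days           */
    }

    /* Penalize over/under allocation relative to an even split. */
    int target = DAYS / PEOPLE;
    for (int p = 0; p < PEOPLE; p++)
        demerits += abs(load[p] - target);

    return demerits;
}
```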
By what mechanism does LinkedIn determine so quickly, on a group page, which of a user's connections are also members of that group?
I don't have any inside knowledge of LinkedIn's source code, but I do know that they are the founding contributors of the NoSQL distributed key-value database Project Voldemort (http://project-voldemort.com/). These kinds of systems are architected to return queries blazingly fast. They are often ideal for write-seldom/read-often scenarios and tend to sacrifice consistency for high scalability and availability.
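To make the idea concrete, here is a toy sketch (not LinkedIn's actual design) of the read-optimized pattern such stores enable: compute the expensive answer at write time and store it under a composite key, so the group-page read becomes a single lookup. The names and the linear-scan store are hypothetical stand-ins for a real hash-indexed key-value store:

```c
/* Toy key-value lookup: the answer is precomputed at write time,
   so the read path is a single get(). Names are hypothetical. */
#include <stdio.h>
#include <string.h>

struct kv_entry {
    char key[64];        /* e.g. "user:42|group:7"             */
    char value[128];     /* precomputed list of connection ids */
};

static struct kv_entry store[] = {
    { "user:42|group:7", "alice,bob,carol" },
};

/* Linear scan for illustration; a real store uses a hash index. */
const char *kv_get(const char *key) {
    for (size_t i = 0; i < sizeof store / sizeof store[0]; i++)
        if (strcmp(store[i].key, key) == 0)
            return store[i].value;
    return NULL;
}

int main(void) {
    const char *v = kv_get("user:42|group:7");
    printf("connections in group: %s\n", v ? v : "(none)");
    return 0;
}
```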
I am reading the C programming guide and all of a sudden it starts talking about pipes. Can someone simply tell me what a pipe is?
They are OS objects appearing as file descriptors in different processes, allowing the output of one process to be the input of another.
You want to read Beej's IPC Guide, specifically the pipe section.
There is no form of IPC that is simpler than pipes. Implemented on every flavor of Unix, pipe() and fork() make up the functionality behind the "|" in "ls | more". They are marginally useful for cool things, but are a good way to learn about basic methods of IPC.
Also check the other guides at http://beej.us/guide/.
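To make that concrete, here is a minimal sketch of pipe() plus fork() in C, roughly the mechanism behind "ls | more": the parent writes into one end of the pipe and the child reads from the other:

```c
/* Minimal sketch of pipe() + fork(): parent writes, child reads. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

int main(void) {
    int fds[2];                 /* fds[0] = read end, fds[1] = write end */
    if (pipe(fds) == -1) { perror("pipe"); return 1; }

    pid_t pid = fork();
    if (pid == 0) {             /* child: read from the pipe */
        close(fds[1]);
        char buf[64];
        ssize_t n = read(fds[0], buf, sizeof buf - 1);
        if (n > 0) {
            buf[n] = '\0';
            printf("child received: %s\n", buf);
        }
        close(fds[0]);
        return 0;
    }

    /* parent: write into the pipe, then wait for the child */
    close(fds[0]);
    const char *msg = "hello through the pipe";
    write(fds[1], msg, strlen(msg));
    close(fds[1]);
    wait(NULL);
    return 0;
}
```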
Most likely, this means a pipeline as in the context of Unix-like operating systems; see Pipeline (Unix) on Wikipedia. It is a chain of processes in which the output of one process is the input to the next one.
I want to know: does applying a transaction in a stored procedure slow down the execution of the query?
If yes, why?
I would also like to know what SQL Server actually does internally when a transaction is applied to a query.
Consider that there are different types of transaction within SQL Server, and that the default setting for the Database Engine is "autocommit": each individual Transact-SQL statement is committed when it completes. You do not have to issue statements to control transactions unless you wish to manage them explicitly with more refined control.
See: Controlling Transactions (Database Engine)
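For illustration, a small T-SQL sketch of the difference; the table and column names are hypothetical:

```sql
-- Autocommit (the default): each statement commits on its own.
INSERT INTO dbo.Orders (CustomerId) VALUES (1);
INSERT INTO dbo.Orders (CustomerId) VALUES (2);

-- Explicit transaction: both statements commit or roll back together.
BEGIN TRANSACTION;
    INSERT INTO dbo.Orders (CustomerId) VALUES (1);
    INSERT INTO dbo.Orders (CustomerId) VALUES (2);
COMMIT TRANSACTION;
```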
Are you perhaps therefore asking if there is any additional overhead when explicitly controlling transactions?
The short answer is yes. As to what exactly that overhead is: it depends. It depends on multiple factors, such as the method used (transactions managed through an API or directly via T-SQL), as well as the performance of your specific hardware.
On the performance front, I would guess that there will be a slight performance degradation when using transactions, but it is usually negligible.
For more on transaction processing, here are two links below. Hope they help:
http://www.informit.com/articles/article.aspx?p=26657
http://sqlserverpedia.com/wiki/Database-Transaction