While I've heard of some quantum emulators, I don't know whether I can recreate quantum interference (i.e. wave interference) using them.
Quantum mechanics may be expensive to simulate, but it is still computable. So: yes, you can reproduce quantum interference with a classical simulator.
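For instance, here is a minimal sketch in plain C of a Mach-Zehnder-style interferometer: a two-amplitude state vector, Hadamard-like beam splitters, and a phase shift on one arm. It produces the familiar cos^2 interference fringe at the detectors, which is exactly the wave interference you're asking about.

/* Classical simulation of single-particle interference:
   two complex amplitudes, two beam splitters, one phase shift. */
#include <stdio.h>
#include <math.h>
#include <complex.h>

int main(void) {
    const double pi = acos(-1.0);
    const double s = 1.0 / sqrt(2.0);
    for (int deg = 0; deg <= 360; deg += 45) {
        double phi = deg * pi / 180.0;
        /* particle enters in arm 0 */
        double complex a0 = 1.0, a1 = 0.0;
        /* first beam splitter (Hadamard-like) */
        double complex b0 = s * (a0 + a1);
        double complex b1 = s * (a0 - a1);
        /* phase shift phi on arm 1 */
        b1 *= cexp(I * phi);
        /* second beam splitter */
        double complex c0 = s * (b0 + b1);
        double complex c1 = s * (b0 - b1);
        printf("phi = %3d deg  P(det 0) = %.3f  P(det 1) = %.3f\n",
               deg, creal(c0 * conj(c0)), creal(c1 * conj(c1)));
    }
    return 0;
}

P(det 0) comes out as cos^2(phi/2): the probabilities oscillate with the phase, which is interference, computed with nothing but ordinary complex arithmetic.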
We're developing a multithreaded project. My colleague said that gprof works perfectly with multithreaded programs, with no workaround needed. I read otherwise some time ago:
http://sam.zoy.org/writings/programming/gprof.html
http://lists.gnu.org/archive/html/bug-binutils/2010-05/msg00029.html
I also read this:
How to profile multi-threaded C++ application on Linux?
So I'm guessing the workaround is no longer needed? If so, since when is it not needed?
Unless you change the processing model, gprof works fine.
Changing the processing model means using a co-processor or GPUs as computing units. In the worst case you have to call the setitimer function manually for every thread (a sketch of that workaround is below), but as of recent binutils versions (2013-14) it's no longer needed.
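Here is a sketch of that classic workaround, following the approach in the sam.zoy.org article linked above: gprof takes its samples via the ITIMER_PROF timer, which older kernels/libcs delivered only to the main thread, so other threads were never sampled. Wrapping thread creation so that every new thread re-arms the profiling timer fixes that. The name profiled_pthread_create is just illustrative, not part of any library.

#include <pthread.h>
#include <stdlib.h>
#include <sys/time.h>

struct wrap_args {
    void *(*real_start)(void *);   /* the user's thread function */
    void *real_arg;
    struct itimerval itimer;       /* profiling timer captured in the parent */
};

static void *wrapped_start(void *p) {
    struct wrap_args a = *(struct wrap_args *)p;
    free(p);
    /* re-arm the profiling timer in the new thread so gprof samples it */
    setitimer(ITIMER_PROF, &a.itimer, NULL);
    return a.real_start(a.real_arg);
}

int profiled_pthread_create(pthread_t *t, const pthread_attr_t *attr,
                            void *(*start)(void *), void *arg) {
    struct wrap_args *a = malloc(sizeof *a);
    if (a == NULL)
        return -1;
    a->real_start = start;
    a->real_arg = arg;
    getitimer(ITIMER_PROF, &a->itimer);   /* copy the caller's timer */
    return pthread_create(t, attr, wrapped_start, a);
}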
Even so, gprof misbehaves in certain cases, so I'd advise using Intel VTune instead, which gives more accurate and more detailed information.
I'm doing some computer hardware architecture exploration and I'm eager to test different tasks on my prototype, so I need some code that simulates the task of video encoding and/or decoding (H.264 would be perfect, but other codecs are also OK).
Is there anything I can use? It doesn't have to be exact encoding/decoding, just some code that roughly approximates the same workload with the same kind of computations, so I can get some performance/power-consumption results.
Oh, and it has to be in pure C, without any sophisticated libraries (math.h is fine), since I'll have to put it onto a hardware module.
Thanks in advance
Have a look at libavcodec. It is pure C.
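If you want something rougher and fully self-contained rather than a real codec, here is a minimal pure-C sketch that exercises the kind of computation an encoder spends its time on: the H.264-style 4x4 forward integer transform, plus the SAD loop a motion search would run, over a synthetic frame. The frame size and pixel patterns are arbitrary choices; this is a workload stand-in, not an encoder.

#include <stdio.h>
#include <stdlib.h>

#define W 352
#define H 288   /* CIF-sized synthetic frame */

/* H.264-style 4x4 forward integer transform, rows then columns */
static void transform4x4(const int in[4][4], int out[4][4]) {
    int tmp[4][4];
    for (int i = 0; i < 4; i++) {
        int e0 = in[i][0] + in[i][3], e3 = in[i][0] - in[i][3];
        int e1 = in[i][1] + in[i][2], e2 = in[i][1] - in[i][2];
        tmp[i][0] = e0 + e1;        tmp[i][2] = e0 - e1;
        tmp[i][1] = 2 * e3 + e2;    tmp[i][3] = e3 - 2 * e2;
    }
    for (int j = 0; j < 4; j++) {
        int e0 = tmp[0][j] + tmp[3][j], e3 = tmp[0][j] - tmp[3][j];
        int e1 = tmp[1][j] + tmp[2][j], e2 = tmp[1][j] - tmp[2][j];
        out[0][j] = e0 + e1;        out[2][j] = e0 - e1;
        out[1][j] = 2 * e3 + e2;    out[3][j] = e3 - 2 * e2;
    }
}

/* sum of absolute differences over a 16x16 block, as a motion search would compute */
static int sad16x16(const unsigned char *a, const unsigned char *b, int stride) {
    int sad = 0;
    for (int y = 0; y < 16; y++)
        for (int x = 0; x < 16; x++)
            sad += abs(a[y * stride + x] - b[y * stride + x]);
    return sad;
}

int main(void) {
    static unsigned char cur[H * W], ref[H * W];
    long checksum = 0;
    for (int i = 0; i < H * W; i++) {        /* synthetic pixel data */
        cur[i] = (unsigned char)(i * 7);
        ref[i] = (unsigned char)(i * 7 + (i & 3));
    }
    for (int by = 0; by + 16 <= H; by += 16)     /* crude "motion search" pass */
        for (int bx = 0; bx + 16 <= W; bx += 16)
            checksum += sad16x16(cur + by * W + bx, ref + by * W + bx, W);
    for (int by = 0; by + 4 <= H; by += 4)       /* transform every 4x4 block */
        for (int bx = 0; bx + 4 <= W; bx += 4) {
            int blk[4][4], coef[4][4];
            for (int y = 0; y < 4; y++)
                for (int x = 0; x < 4; x++)
                    blk[y][x] = cur[(by + y) * W + bx + x] - 128;
            transform4x4(blk, coef);
            checksum += coef[0][0];
        }
    printf("checksum: %ld\n", checksum);   /* keep the work observable */
    return 0;
}

Wrap the two block loops in an outer "frames per second" loop if you need a longer run for power measurements.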
I've used code from CUDA C Best Practices to implement an execution timer. However, there is something strange, and I don't know if it's an anomaly or normal: I get different readouts each time I run my CUDA app.
Could these readings be related to my design, or is this something I should expect?
I'm not running any graphic intensive applications on my machine, other than Windows 7.
Well, it depends how big the differences are. One source of anomalies is the kernel scheduler: it may happen to hand out some extra timeslices during the timed region (graphics API calls also have error checking involved), which shows up as extra execution time. If the differences are very large, I would say check your code, but if they are on the order of milliseconds I wouldn't worry about it; ±10 ms is the usual timeslicing quantum in most OSes (Windows probably included).
Also, Aero is fairly GPU-intensive, so it may be adding to the discrepancies you are seeing.
I've used code from CUDA C Best Practices to implement an execution timer.
Yeah, well, that's not a "best practice" in my experience.
I suggest using the nvprof profiler instead for your device-side code and CUDA Runtime API calls (it also works relatively well, I think, for your own host-side code). It takes a bit of hassle to set up and to figure out which options you want, but it's worth it.
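If you do stay with the event-based timer from the Best Practices guide, a warm-up launch plus averaging over many runs usually tames most of the run-to-run jitter, since the first launch pays one-off costs (context setup, caches). A minimal sketch with a stand-in kernel:

#include <stdio.h>
#include <cuda_runtime.h>

__global__ void dummy_kernel(float *x, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] = x[i] * 2.0f + 1.0f;
}

int main(void) {
    const int n = 1 << 20, runs = 100;
    float *d_x, ms, total = 0.0f;
    cudaEvent_t start, stop;
    cudaMalloc(&d_x, n * sizeof(float));
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    dummy_kernel<<<(n + 255) / 256, 256>>>(d_x, n);  /* warm-up, untimed */
    cudaDeviceSynchronize();

    for (int r = 0; r < runs; r++) {
        cudaEventRecord(start, 0);
        dummy_kernel<<<(n + 255) / 256, 256>>>(d_x, n);
        cudaEventRecord(stop, 0);
        cudaEventSynchronize(stop);   /* wait for the kernel to finish */
        cudaEventElapsedTime(&ms, start, stop);
        total += ms;
    }
    printf("mean kernel time over %d runs: %.4f ms\n", runs, total / runs);

    cudaEventDestroy(start);
    cudaEventDestroy(stop);
    cudaFree(d_x);
    return 0;
}

Reporting the mean (or better, the minimum) over many timed runs makes the scheduler noise mentioned above much less visible in your numbers.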
I'm working on a small application and thinking about integrating BLAST or another local alignment search into it. My searching has only turned up programs that need to be installed and called as external programs.
Is there a way short of me implementing it from scratch? Any pre-made library perhaps?
Does it have to be in C, or would C++ also be OK? If so, you might want to look at the SeqAn library here.
This also touches on reproducibility of results: it is always better to use the raw BLAST binary provided by NCBI or UCSC, because it will make your results easier for other scientists to reproduce and will save you a lot of time otherwise spent writing tests (more time than you can imagine).
For day-to-day work I have often used exonerate, a tool written in C which can do both global and local alignment, has a simple Unix-like interface, and doesn't require you to format your input as BLAST does.
Moreover, keep in mind that people usually use a combination of makefiles and scripts to define a pipeline, instead of calling everything from a script: most programming languages are not well suited to defining pipelines, while automated build tools like Make are not useful for scripting tasks. Have a look at these examples: http://skam.sourceforge.net/skam-intro.html http://swc.scipy.org/lec/build.html
I just stumbled across the thing I would have wanted: The NCBI C++ Toolkit. Thanks for all the suggestions though.
The BLAST algorithm was first implemented ~20 years ago; it is now a very large piece of software, and I cannot imagine it being easily reimplemented from scratch. You can learn about it by looking at the sources of the 'blastall' program in the NCBI toolkit.
A simpler pairwise algorithm (Smith-Waterman, Needleman-Wunsch) should be easier to implement:
Computational Molecular Biology: An Introduction has code for Smith-Waterman and other dynamic programming alignment algorithms.
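To give a sense of scale, here is a minimal Needleman-Wunsch scorer in C. The scores (match +1, mismatch -1, gap -1) are arbitrary example parameters, and it returns only the optimal global alignment score, with no traceback:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static int max3(int a, int b, int c) {
    int m = a > b ? a : b;
    return m > c ? m : c;
}

int nw_score(const char *a, const char *b) {
    const int match = 1, mismatch = -1, gap = -1;
    int la = strlen(a), lb = strlen(b);
    /* DP over two rolling rows: row[j] = best score for prefixes a[..i], b[..j] */
    int *prev = malloc((lb + 1) * sizeof *prev);
    int *cur  = malloc((lb + 1) * sizeof *cur);
    for (int j = 0; j <= lb; j++) prev[j] = j * gap;
    for (int i = 1; i <= la; i++) {
        cur[0] = i * gap;
        for (int j = 1; j <= lb; j++) {
            int sub = (a[i-1] == b[j-1]) ? match : mismatch;
            cur[j] = max3(prev[j-1] + sub,   /* align a[i-1] with b[j-1] */
                          prev[j] + gap,     /* gap in b */
                          cur[j-1] + gap);   /* gap in a */
        }
        int *tmp = prev; prev = cur; cur = tmp;
    }
    int score = prev[lb];
    free(prev);
    free(cur);
    return score;
}

int main(void) {
    printf("score: %d\n", nw_score("GATTACA", "GCATGCU"));
    return 0;
}

Smith-Waterman is nearly the same recurrence, clamped at zero and maximized over all cells instead of read from the final cell.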
I use NetBLAST through the blastcl3 client binary. I believe that the blastcl3 binary is a pretty thin client for the NetBLAST web service.
If so, it shouldn't be too hard to sniff the packets and implement your own client. Depending on your use case, this might be faster/easier than implementing your own alignment algorithm. It does, however, introduce a dependency on NCBI's web services.
http://www.ncbi.nlm.nih.gov/staff/tao/URLAPI/netblast.html
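For illustration, here is a rough sketch of submitting a query to NCBI's BLAST URL API directly with libcurl. Treat the endpoint and parameter names as assumptions to verify against the documentation linked above, and note that the real protocol involves polling for the returned request ID (RID) with CMD=Get, which is omitted here:

#include <stdio.h>
#include <curl/curl.h>

int main(void) {
    CURL *curl = curl_easy_init();
    if (!curl) return 1;
    curl_easy_setopt(curl, CURLOPT_URL, "https://blast.ncbi.nlm.nih.gov/Blast.cgi");
    /* CMD=Put submits the search; the response contains an RID to poll later */
    curl_easy_setopt(curl, CURLOPT_POSTFIELDS,
                     "CMD=Put&PROGRAM=blastn&DATABASE=nt&QUERY=ACGTACGTACGT");
    CURLcode rc = curl_easy_perform(curl);  /* response body goes to stdout */
    curl_easy_cleanup(curl);
    return rc == CURLE_OK ? 0 : 1;
}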
I posted a similar question (running BLAST (bl2seq) without creating sequence files)
Basically, the answer I came up with was running this command:
bl2seq -i <(echo sequence1) -j <(echo sequence2) -p blastn
That uses bash process substitution to feed the output of each echo command to the bl2seq (BLAST 2 sequences) program as though it were a file.
But I couldn't get it to work by calling system from Python, presumably because process substitution is a bash feature and system() runs commands through /bin/sh.
I'm thinking of something smaller than a laptop that I can spend my commute hours on, doing Project Euler problems or the like.
Any ideas?
If you mean a programming platform, you could get a netbook like the ASUS EEE.
Or if you meant smallest programmable device, check out a PIC microcontroller:
http://en.wikipedia.org/wiki/PIC_microcontroller
This may sound crazy, but try pen/pencil and paper. No, you can't run the code, but it'll help you rely less on online references (yes, they're good, but memory skills help us all), and it'll probably also help you plan your code better.
I've programmed directly on my HP 48G series calculator.
There's a good programming tutorial for it here. I'll have to dust it off and see if it will pass Project Euler's one-minute rule.
If you are looking for a microcontroller or similar my advice to you would be to check out either an AVR, PIC, Arduino, or BeagleBoard.
All are relatively cheap and easy to program (the first three more so). AVRs and PICs are microcontrollers that you can program in C or assembly, but you will need some kind of prototyping board to achieve anything. An Arduino is an AVR chip sitting on a board, so it's much easier to get something working in a small amount of time. They're also quite popular, and you can find many finished projects at Hackaday. Lastly, the BeagleBoard is a much gruntier board that will run embedded Linux.
My recommendation is for the Arduino.
There are many more suggestions here.
However, if you are looking for a small laptop to program on, you have plenty of options: an Asus EEE PC, HP 2133 (I believe that's the correct model), MSI Wind, MacBook Air, etc. As other people have suggested, check out some netbooks. There are also various PDAs and mobile phones you could program, such as an Android phone or an OpenMoko phone. There are plenty of options; I suggest you work out what size you are looking for specifically, and that will narrow down your choices.
Good Luck.
I'll take the reputation hit to say this: why not read a book or watch the scenery go by? Trying to cram more programming into your day isn't actually good for you, and may even make you less productive.
I have used SmallBASIC on my Palm OS 5 device for a while now, and it seems to work well with most of the problems I throw at it.
How about using a Palm with the OnboardC compiler?
A netbook would be ideal.
A graphing calculator might be too limited for programming.
If you're thinking of a microcontroller, there are several models of Arduino board that are very approachable for someone not familiar with embedded programming.
I have a Nokia E51 with a Python interpreter. It's not at all pleasant to type on a numeric keypad, but I think it's as small as you can get.
I have a Samsung i760 running Windows Mobile 6. The slide-out keyboard is fantastic (best mini-keyboard on any device) - I can type on it almost as fast as a normal keyboard. I mainly use it to write Oracle Lite queries in mSQL, which is borderline unusable with any other PDA keyboard.
This question led me to wonder about real programming environments for this device, so I asked another question, and one of the answers was a link to this, which is a Windows Mobile IDE for creating .NET Windows Mobile applications. You write them in C#, even.
Netbooks are smaller than your typical laptop and have plenty of power.
It's pretty subjective. I code on my commute using a 15.4" laptop and I find it quite limiting.
I could still work at 13", but the limitations would be getting so large I'd already be questioning if it's worth it. Anything smaller would be right out.
But then I tend to work with lots of windows open. Multiple editors, docs, browsers etc. Cutting back on that eats into my productivity. At home I have a 30" display. At work I have 2x 24" displays.
If you tend to work mostly in one window, rarely consult docs and other apps etc, you could probably go smaller.
It depends so much on the type of person you are, what you are comfortable with, the way you work, what you are working in and with... the list goes on.
My guess is that for most developers 13" is going to be the smallest before it gets so frustrating that you're better off just listening to podcasts or something - but YMMV - and will!