The Higgs training runs for LightGBM take the same amount of time for me on both GPU and CPU: 26 seconds. The logs confirm that the GPU run is actually using the GPU (transferring data to the GPU, etc.).
I went through the tutorial at https://lightgbm.readthedocs.io/en/latest/GPU-Tutorial.html to install LightGBM for GPU. It installed fine and I was able to run GPU training to confirm.
GPU and CPU specs below for comparison.
CPU: Intel Core i7-7820X, 8 cores @ 3.60 GHz (Skylake-X, latest generation)
vs.
GPU: NVIDIA RTX 2080, 8 GB
Note: I accepted all configuration defaults as per the tutorial, since this is the benchmark. I could play around with them, but that might defeat the point.
Has anyone tried something similar on an RTX card?
The performance benchmarks at https://lightgbm.readthedocs.io/en/latest/GPU-Performance.html use GTX cards but indicate that any recent NVIDIA card should work. The benchmark page warns against Kepler cards, but the RTX 2080 is Turing and appears to support hardware atomic operations (https://en.wikipedia.org/wiki/CUDA). My limited knowledge suggests it should all be fine.
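For reference, here is a minimal sketch of that kind of GPU run against LightGBM's C API, with the device=gpu parameters the tutorial's config sets. The file name, parameter values, and iteration count are illustrative placeholders rather than the exact benchmark configuration.

    /* Minimal sketch: train LightGBM with device=gpu via the C API.
     * File name and parameter values are placeholders; substitute the
     * tutorial's higgs.train data and its suggested config values.
     * Link against the LightGBM shared library (lib_lightgbm). */
    #include <stdio.h>
    #include <LightGBM/c_api.h>

    int main(void) {
        DatasetHandle train = NULL;
        BoosterHandle booster = NULL;

        /* The same key=value parameters a CLI config file would carry. */
        const char *params =
            "objective=binary metric=auc device=gpu "
            "gpu_platform_id=0 gpu_device_id=0 "
            "max_bin=63 num_leaves=255 learning_rate=0.1";

        if (LGBM_DatasetCreateFromFile("higgs.train", params, NULL, &train) != 0) {
            fprintf(stderr, "failed to load dataset\n");
            return 1;
        }
        if (LGBM_BoosterCreate(train, params, &booster) != 0) {
            fprintf(stderr, "failed to create booster\n");
            return 1;
        }

        /* A short, fixed number of boosting iterations. */
        for (int i = 0; i < 50; ++i) {
            int finished = 0;
            LGBM_BoosterUpdateOneIter(booster, &finished);
            if (finished) break;
        }

        LGBM_BoosterFree(booster);
        LGBM_DatasetFree(train);
        return 0;
    }

The parameters are passed as a single space-separated key=value string, the same format a CLI config file carries, so timing device=gpu against device=cpu under otherwise identical parameters is the most direct comparison.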
Hello All,
I have a Linux VM with 8 GB of RAM. This is a picture taken from htop that shows all the system information (tasks, services).
No single application is using too much RAM when I look at the PIDs, but total RAM usage is 6.26 GB. Why is the virtual memory of these tasks so high?
E.g.: mssql-server (5711M VIRT, 511M RES)
I have already configured MSSQL Server, setting memory.memorylimitmb to 3072 MB.
But my server is still using too much RAM.
Can anyone help me? How can I fix it?
I have a query: is there any simulation software available (like Simulink in MATLAB) for microcontrollers?
I have tested my system in Simulink and generated C code for it. Now I want to verify the code by running it on a microcontroller simulator that also has such scopes, etc. (so I can easily verify my embedded code).
Xcos is a Simulink analogue in Scilab, very helpful for those who can't afford a MATLAB license. The Arduino library will help you build whatever project you want.
You can download it for free here: https://www.scilab.org/
Now I want to verify the code by running it on a microcontroller simulator that also has such scopes, etc. (so I can easily verify my embedded code)
You might consider a static program analysis approach. Tools like Frama-C, Coverity, or the Clang analyzer come to mind, but they are difficult to learn and use. If your code compiles with a recent GCC-based cross-compiler, consider also developing your own GCC plugin.
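To give a flavour of what those tools consume, here is a toy contract in ACSL, the annotation language that Frama-C's plugins (e.g. WP) work on; the function itself is only an illustration.

    #include <limits.h>

    /* A toy ACSL contract: the caller must pass a valid pointer to a value
     * that will not overflow, and the function promises to have incremented
     * it by exactly one. Frama-C's WP plugin can attempt to prove this. */
    /*@ requires \valid(p);
        requires *p < INT_MAX;
        assigns *p;
        ensures *p == \old(*p) + 1;
    */
    void increment(int *p)
    {
        (*p)++;
    }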
For some microcontrollers you might find software emulators (e.g. SourceForge lists several of them). QEMU could be a possibility (but again, there is a learning curve).
See also this report....
Take into account your time (including learning effort). You might decide that buying an Arduino (or a Raspberry Pi) is cheaper than learning to use emulators for it. For example, GCC is free software, but you'll need months of effort to dive into its source code (many millions of lines of code). Scilab is also free software. So is GHDL.
If you have access to the entire documentation of your microcontroller (including the cycle timing of every machine instruction), writing the emulator yourself is also a possibility. Again, budget months of effort. JIT compilation libraries such as libgccjit or libjit could be interesting to use for that, as could higher-level languages (e.g. SBCL).
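To make "writing the emulator" concrete: the core of an instruction-set emulator is just a fetch-decode-execute loop over the documented opcodes, plus cycle accounting and peripheral models. The sketch below uses an invented three-instruction machine, not any real microcontroller.

    #include <stdint.h>
    #include <stdio.h>

    /* Toy fetch-decode-execute loop for an invented 3-instruction machine.
     * A real microcontroller emulator does the same for every documented
     * opcode and additionally models cycle counts, peripherals and interrupts. */
    enum { OP_HALT = 0x00, OP_LOADI = 0x01, OP_ADD = 0x02 };

    int main(void) {
        uint8_t mem[] = {            /* tiny program in "ROM" */
            OP_LOADI, 0, 2,          /* r0 = 2   */
            OP_LOADI, 1, 40,         /* r1 = 40  */
            OP_ADD,   0, 1,          /* r0 += r1 */
            OP_HALT
        };
        uint8_t reg[4] = {0};
        uint16_t pc = 0;
        unsigned long cycles = 0;

        for (;;) {
            uint8_t op = mem[pc++];                  /* fetch */
            switch (op) {                            /* decode + execute */
            case OP_LOADI: { uint8_t r = mem[pc++]; reg[r] = mem[pc++]; cycles++; break; }
            case OP_ADD:   { uint8_t a = mem[pc++], b = mem[pc++]; reg[a] += reg[b]; cycles++; break; }
            case OP_HALT:
                printf("r0=%u after %lu cycles\n", (unsigned)reg[0], cycles);
                return 0;
            default:
                fprintf(stderr, "illegal opcode 0x%02x at pc=%u\n",
                        (unsigned)op, (unsigned)(pc - 1));
                return 1;
            }
        }
    }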
Be aware also of Rice's theorem, C compilers like CompCert, processor architectures like RISC-V, and simulator frameworks like UniSIM.
Notice that automatically assisted verification of C code reduces program bugs, but not software design bugs: you basically move bugs from programs to their specifications (as described by the V-model of software development). There is No Silver Bullet.
I was making a little device that would have three buttons (like the ones at RadioShack), each performing its own action. These buttons and their actions would be controlled by a very small real-time operating system that I would put on this device.
Would I need an ARM Processor in any way?
How would I put the real time operating system on the device?
What OS would I have to compile this on (e.g. Ubuntu, Mac OS X, Windows 7)?
Are there any examples of anyone doing this?
P.S. No prebuilt boards (e.g. Arduino). I would build the board myself.
Any feedback would be greatly appreciated!
Even if you don't want to use a prebuilt board in the finished product, I'd recommend getting one (like the Arduino), building your product, programming it, and testing it on the breadboard, and then simply rebuilding it however you want, using the same hardware you've been using.
That helps you out especially the next time you're building something, because you already have the prototype board and the toolchain ready to go.
Compiling your files can be done on any OS.
Enumerated version:
No, and I wouldn't even recommend an ARM processor; rather an ATmega328 or similar (see the sketch below).
Using a programmer.
Any.
Probably millions, or at least hundreds of thousands of examples, yes.
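For question 1, a concrete (hypothetical) taste of the "ATmega328 or similar" route: a bare-metal three-button loop with avr-gcc/avr-libc. The pin assignments, clock value, and LED "actions" are arbitrary examples, not a recommendation for your specific board.

    /* Bare-metal three-button sketch for an ATmega328 (avr-gcc / avr-libc).
     * Assumed wiring (arbitrary): buttons on PD2..PD4 to ground with the
     * internal pull-ups enabled, one LED per button on PB0..PB2.
     * Build roughly: avr-gcc -mmcu=atmega328p -Os buttons.c -o buttons.elf */
    #define F_CPU 8000000UL          /* assumed clock; set to your actual clock */
    #include <avr/io.h>
    #include <util/delay.h>

    int main(void)
    {
        DDRD  &= (uint8_t)~((1 << PD2) | (1 << PD3) | (1 << PD4)); /* buttons as inputs */
        PORTD |= (1 << PD2) | (1 << PD3) | (1 << PD4);             /* enable pull-ups   */
        DDRB  |= (1 << PB0) | (1 << PB1) | (1 << PB2);             /* LEDs as outputs   */

        for (;;) {
            /* Buttons read low when pressed (pulled to ground). */
            if (!(PIND & (1 << PD2))) PORTB ^= (1 << PB0);  /* "action" 1 */
            if (!(PIND & (1 << PD3))) PORTB ^= (1 << PB1);  /* "action" 2 */
            if (!(PIND & (1 << PD4))) PORTB ^= (1 << PB2);  /* "action" 3 */
            _delay_ms(50);                                  /* crude debounce */
        }
    }

You would flash the resulting binary with a programmer (e.g. over the ISP header), which is what answer 2 refers to; a real product would replace the busy-wait with proper debouncing and, if desired, a small RTOS.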
I am using Maya 2012 to create a wall made up of bricks (polyCubes). When I play back the scene, Maya takes so long to calculate gravity that my frame rate drops as low as 0.3 fps. Are there some settings that I overlooked, or are Maya's dynamics inherently slow?
Also, the bricks behave weirdly: they keep twitching and sliding on each other, as if they were bars of soap, even when I set friction to 1. I wonder why they can't reach an equilibrium or a stable state.
My computer: Intel Core 2 Duo T8100 2.1 GHz, 3 GB RAM, NVIDIA GeForce 8400M GS, Windows XP SP3
Maya's RBD engine is ANCIENT. In fact, it should still be the EXACT SAME as in version 4.0 or thereabouts.
Up to Maya 2011 the only way to get decent rigid body dynamics was to use nCloth with rigid settings to emulate rigid behaviour. That is not an optimized workflow though, and with lots of bodies it will slow everything to death and probably crash. So prior to 2012 the best solution was to resort to a 3rd-party dynamics engine plugin like Bullet (there's a free, open-source version of Disney's implementation called Dynamica that you can find online).
If I'm not mistaken, with Maya 2012 they included DMM (Digital Molecular Matter), which was one of the aforementioned 3rd party plugins. I haven't tested it yet since we're still using 2011 in production (and we're using Houdini to do our FX stuff), but you should be able to load it from the plugin manager. Then just check the docs for usage instructions.
Hope it helped.
I would like to learn how to write device drivers because I think it would be fun. I use a Mac OS X MacBook, but I also have an Ubuntu machine (running on a Mac Mini). I am pretty familiar with C and am currently reading this book. I have found some links online, such as the Mac Dev Center. I am doing this because it would be fun, and I think there would be real gratification in seeing hardware operate because of software I wrote.
I guess what I would like is some tips, advice, and guidance. Does anyone know of a list of devices that don't have drivers, or can I write a driver for something that's already supported (I'd prefer the former, so I'm actually providing value)? What's a good device to get started with? Am I biting off more than I can chew? I'm not afraid of low-level programming or assembly or whatever amount of effort is required. I'd really like a challenge!
For Linux, you might look into picking up the O'Reilly Linux Device Drivers book or reading PDFs online. In my opinion, it is one of the better texts around on the subject.
The Linux Kernel Module Programming Guide is another good resource.
You may also want to pick up a book specifically on the Linux Kernel. I picked up a copy of Robert Love's Linux Kernel Development (2nd Edition) for this purpose (3rd Edition on the way).
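To get a feel for the starting point those books all use, the canonical first kernel module is only a few lines; this is the usual hello-world pattern (module name and messages are arbitrary).

    /* hello.c - the canonical first Linux kernel module, as covered early in
     * LDD3 and the Linux Kernel Module Programming Guide.
     * Build out-of-tree with a two-line Makefile (obj-m += hello.o) and
     *   make -C /lib/modules/$(uname -r)/build M=$PWD modules
     * then load/unload with insmod/rmmod and watch dmesg. */
    #include <linux/init.h>
    #include <linux/kernel.h>
    #include <linux/module.h>

    MODULE_LICENSE("GPL");
    MODULE_DESCRIPTION("Hello-world module");

    static int __init hello_init(void)
    {
        pr_info("hello: module loaded\n");
        return 0;               /* 0 = success; nonzero aborts the load */
    }

    static void __exit hello_exit(void)
    {
        pr_info("hello: module unloaded\n");
    }

    module_init(hello_init);
    module_exit(hello_exit);

Everything in LDD3 (character devices, ioctl, interrupts, and so on) builds outward from this skeleton, so it's worth getting the out-of-tree build and dmesg workflow comfortable before touching real hardware.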
Writing a device driver can be pretty simple, or it can be almost arbitrarily complicated. For instance, I've been involved in a project where it took six of us almost three years to solve ONE bug in a device driver. Of course, we cleared out dozens of other bugs while looking for it... the code improved immensely. The fix turned out to be an eight line patch, that cost, conservatively, about a million dollars.
But, as a side project to that, I wrote an ethernet driver from the chip data sheet in a week, and took another week to debug it. Haven't needed to touch it since.
There's no way to say in general how much work a driver will be; a GPU driver could cost hundreds of millions, a driver for a single LED costs a couple of hours work at the most.
If you want to go for Linux device driver development, the freely available O'Reilly book Linux Device Drivers, Third Edition is a must read.
In order to find unsupported hardware pieces for which you could write a driver, ask on the Linux mailing lists. Maybe some USB 3.0 device? ;)
For Mac you might want to take a look at the Mac OS X Internals book. It's thick and heavy but fun to read. It is mostly about PowerPC-based Macs but has an appendix about Intel-based ones. For Linux, take a look at Linux Device Drivers, 3rd Edition - it's lighter (free PDFs online :) and is really device-driver-oriented, so it might be a better start.