Apache Zeppelin - loading (importing) code from a different note

I have some common code I want to run with many different notes (with the same interpreter - python).
I want to write the code once and reuse it across notes (and across clusters, since my notes are stored in a shared location), but still be able to edit it when I need to. So I've been trying either to run the code from another note using the same interpreter and context and share the results back, or to load the code into a paragraph of the note I am currently working on.
The best direction I have found is ZeppelinContext's run / runNote, which can run a different note but will not wait for it to complete, so I can't tell when it has finished running. Mostly I only define functions in this "common" note, so the runtime should be almost nothing, but sometimes it takes a couple of seconds to begin executing the note, and I want to wait for that before moving on to the next paragraph.
Is there a way to wait for the other note to finish running before continuing in the current note? Or is there a way to do this other than these run functions?

Related

How to measure program.exe startup time?

I want to measure my program's .exe startup time: the time from the moment of the click or [Enter] (if started from a console, or started any other way) until the moment program startup is done and its code can execute with all structures set up (which probably means when the first statements in main() are executed?).
I want to measure it programmatically from inside my code, so I think I would need to read the exact time the program was launched and subtract it from the time at which the first line of code executes. How can I do this?
Do you want to measure from the click or [enter], which includes the shell (cmd.exe or Windows Explorer) overhead, or from program startup?
Programmatically you can only measure time from process creation, so that won't include finding, reading and mapping the .exe file or any DLLs. The timing will vary depending on what is already mapped in virtual memory. It will include initialisation of the C RTL, but not much else.
Probably the best you can do is GetProcessTimes.
The problem is that even the parent process does not necessarily wait for the child process initialisation to complete. It could, using WaitForInputIdle, but if you are using standard tools like Windows Explorer then you're stuck with that. I can't see any way to measure the shell overhead without writing your own.
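For reference, a minimal sketch of measuring from process creation with GetProcessTimes, per the caveats above (Windows-only; the FILETIME conversion helper is my own, not from the original answer):

#include <windows.h>
#include <stdio.h>

/* Convert a FILETIME (100-ns ticks since 1601) to a 64-bit tick count. */
static unsigned long long filetime_to_ticks(FILETIME ft)
{
    ULARGE_INTEGER u;
    u.LowPart  = ft.dwLowDateTime;
    u.HighPart = ft.dwHighDateTime;
    return u.QuadPart;
}

int main(void)
{
    FILETIME creation, exit_time, kernel, user, now;

    /* Creation time is stamped when the kernel creates the process object. */
    GetProcessTimes(GetCurrentProcess(), &creation, &exit_time, &kernel, &user);
    GetSystemTimeAsFileTime(&now);

    /* Elapsed time from process creation to this point in main(), in ms. */
    unsigned long long elapsed_ms =
        (filetime_to_ticks(now) - filetime_to_ticks(creation)) / 10000ULL;

    printf("Startup took roughly %llu ms\n", elapsed_ms);
    return 0;
}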

Cocoa system() progress?

I am creating an application that must use the system(const char*) function to do some "heavy lifting", and I need to be able to give the user a rough progress percentage. For example, if the OS is moving files for you, it gives you a progress bar with the amount of data moved and the amount of data to move displayed on the window. I need something like that. How can this be done?
Edit: Basically, I give the user the option to save files in a compressed format. If they do so, it saves normally then runs this code:
const char *command = [[NSString stringWithFormat:@"tar -jcvf %@.tar.bz2 %@", saveurl.path, filename] cStringUsingEncoding:NSUTF8StringEncoding];
system(command);
Sometimes this takes a little while (the app deals with video files), so I want to be able to give them an estimated completion time.
I am creating an application that must use the system(const char*) function to do some "heavy lifting"
No, it doesn't have to use system() as such. In fact, it shouldn't. There are plenty of other APIs for running subprocesses, almost all of which will be better. In Cocoa, the most obvious better option is NSTask.
In any case, there's nothing that can tell how much progress a subprocess is making except that subprocess itself. If the program you're running doesn't provide a means for reporting progress, there's little hope. Nothing else can even divine what the purpose or goal of the subprocess is, let alone judge how far along it is to meeting that goal.
Even if the program does report progress, you'll need a means to receive that information. system() doesn't allow for that. NSTask does, as would popen() or manually forking and execing the program.
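As a rough illustration of reading a child's progress output (plain C with popen() rather than NSTask, and with the archive name, directory, and total file count all made up for the example; you would count the files beforehand to get a real percentage):

#include <stdio.h>

int main(void)
{
    /* -v makes tar print each file name as it is archived; 2>&1 merges
       stderr so we see the names regardless of which stream tar uses. */
    FILE *p = popen("tar -jcvf archive.tar.bz2 somedir 2>&1", "r");
    if (p == NULL)
        return 1;

    char line[4096];
    long done = 0;
    const long total = 1234;   /* hypothetical: count the files beforehand */

    while (fgets(line, sizeof line, p) != NULL) {
        done++;
        /* Each line is one archived entry; report a rough percentage. */
        fprintf(stderr, "progress: %ld/%ld (%.0f%%)\n",
                done, total, 100.0 * done / total);
    }

    return pclose(p) == 0 ? 0 : 1;
}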
You would need a command line program that has a way of communicating progress information back to your application (or perhaps simply write progress info to a log file that you parse in your Cocoa app). Are you sure you really need to do this?
For your edited example, you might consider just putting up some sort of spinner or hourglass type UI indicator to show them that the write is in progress, while allowing them to continue with other work. You can't predict archive creation time, especially when you add compression to it.

User CPU time of specific child process after first output to stdout

I'm working on a program which may spawn multiple child processes, and I need to get precise information about the CPU time used by each child process, even if there are several child processes running simultaneously. I'm doing this using wait4(2) on a separate thread of the parent process, which works quite well.
However, this approach provides the total time spent by a specific child process, and I'm only interested in the amount of time spent after a particular event, namely the child process' first output to stdout. I've looked into other ways of getting the CPU time of child processes, such as getrusage(2) and times(3), but these don't seem to be able to distinguish between multiple child processes' times, and instead provide the sum of all child processes' times.
I'm working on a text editor application that lets users run scripts and code in a variety of different languages, and the app has a built-in code timing feature. The app relies on bash scripts to run the user's code, and the first thing my bash scripts do is output a start-of-heading byte (0x02). After this the bash script does whatever it needs to do to run the user's code, and that is the thing I want to time. Bash may do a bit of initialization (to set up PATH variables, etc.) which may take 30 or 40 ms to complete, and I don't want that initialization to be timed along with the rest. If the user's code is, for instance, a simple Hello World program in C, the timing feature might display something like 41 ms instead of the actual 1 ms it took to run their code.
Any ideas on how this might be done?
Thanks :)
A couple of possible solutions come to mind. They don't get CPU time after first output exactly, but they may avoid the problem you're dealing with.
The first is to get rid of the bash scripts and just do the equivalent work in your program before running the user's code (between fork() and exec(), for example). That way the child process' CPU time from wait4() doesn't include your extra setup.
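A minimal sketch of that first approach, assuming the setup the bash script used to do is just exporting a PATH entry (the program path ./user_prog is a placeholder):

#include <stdio.h>
#include <stdlib.h>
#include <sys/resource.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();
    if (pid == 0) {
        /* Child: do the setup the bash script used to do... */
        setenv("PATH", "/usr/local/bin:/usr/bin:/bin", 1);
        /* ...then replace ourselves with the user's program. */
        execl("./user_prog", "user_prog", (char *)NULL);
        _exit(127);                      /* exec failed */
    }

    /* Parent: wait4() reports resource usage for this specific child only,
       so the setup above is not counted. */
    int status;
    struct rusage ru;
    wait4(pid, &status, 0, &ru);

    double user_ms = ru.ru_utime.tv_sec * 1000.0 + ru.ru_utime.tv_usec / 1000.0;
    double sys_ms  = ru.ru_stime.tv_sec * 1000.0 + ru.ru_stime.tv_usec / 1000.0;
    printf("user: %.1f ms, sys: %.1f ms\n", user_ms, sys_ms);
    return 0;
}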
Another possibility is to write a simple application that does nothing but run the user's application and report its CPU time back to your main application. That runner application can then be called from your scripts to run the user's program, rather than calling the user's program directly. The runner application might itself use fork()/exec()/wait4() to run the user's program, and could report the information from wait4() to your main program through any of a variety of means such as a named pipe, message queue, socket, or even just writing the information to a file your main program can open afterward. That way your bash scripts can do work both before and after running the user's program that won't be included in the CPU time reported by the runner application. You'd probably want the runner to accept an argument like the name of a pipe or an output file, in addition to the user's program's path and arguments, so that you can control how the information is reported; that way you could run more than one instance of the runner application and still keep the information they report separate.
If you do want to include the work done by the script, but not the time taken to load bash, then you could signal the main program by echoing something to a pipe from the bash script before and after the parts you want to time. The main program can then measure the time between the start and stop signals, which will at least get you wall-clock time (though not actual CPU time). Otherwise I'm not sure there's a way to perfectly measure the CPU time for just part of the script without using a modified bash (which I'd avoid if possible).
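And a rough sketch of the signalling idea on the main-program side (the FIFO path and the START/STOP markers are invented for the example; it assumes the bash script wraps the timed part in a single redirection so the FIFO stays open between the two markers):

#include <stdio.h>
#include <string.h>
#include <time.h>

int main(void)
{
    /* The FIFO is assumed to exist already (mkfifo /tmp/timing.fifo) and the
       bash script is assumed to do something like:
           { echo START; ...run the user's code...; echo STOP; } > /tmp/timing.fifo */
    FILE *fifo = fopen("/tmp/timing.fifo", "r");
    if (fifo == NULL)
        return 1;

    struct timespec start = {0, 0}, stop;
    char buf[64];

    while (fgets(buf, sizeof buf, fifo) != NULL) {
        if (strncmp(buf, "START", 5) == 0) {
            clock_gettime(CLOCK_MONOTONIC, &start);
        } else if (strncmp(buf, "STOP", 4) == 0) {
            clock_gettime(CLOCK_MONOTONIC, &stop);
            double ms = (stop.tv_sec - start.tv_sec) * 1000.0 +
                        (stop.tv_nsec - start.tv_nsec) / 1e6;
            printf("wall-clock time: %.1f ms\n", ms);
        }
    }
    fclose(fifo);
    return 0;
}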

How to speed up consecutive program startup under Linux?

I've written two relatively small programs in C. They communicate with each other using textual data. Program A generates some problems from given input, B evaluates them and creates input for another iteration of A.
Here's a bash script that I currently use:
for i in {1..1000}
do
./A data > data2;
./B data2 > data;
done
The problem is that since what A and B do is not very time consuming, most of the time is spent (I suppose) in starting the apps up. When I measure the time the script takes to run I get:
$ time ./bash.sh
real 0m10.304s
user 0m4.010s
sys 0m0.113s
So my main question is: is there any way to communicate data between those two apps faster? I don't want to integrate them into one application, because I'm trying to build a toolset with independent, easily communicating tools (as was suggested in "The Art of Unix Programming", from which I'm learning how to write reusable software).
PS. The data and data2 files contain sets of data needed all at once by those applications (so communicating, e.g., one line of data at a time is impossible).
Thanks for any suggestions.
cheers,
kajman
Can you create a named pipe?
mkfifo data1
mkfifo data2
./A data1 > data2 &
./B data2 > data1
If your application is reading and writing in a loop, this could work :)
If you used pipes to transfer the stdout of program A to the stdin of program B you would remove the need to write the file "data2" each loop.
./A data1 | ./B > data1
Program B would need to have the capability of using input from stdin rather than a specified file.
If you want to make a program run faster, you need to understand what is making the program run slowly. The field of computer science dedicated to measuring the performance of a running program is called profiling.
Once you discover which internal portion of your program is running slow, you can generally speed it up. How you go about speeding up that item depends heavily on what "the slow part" is doing and how it is "being done".
Several people have recommended pipes for moving the data directly from the output of one program into the input of another program. Assuming you rewrite your tools to handle input and output in a piped manner, this might improve performance. Again, it depends on what you are doing and how you are doing it.
For example, if your tool just fixes windows style end-of-lines into unix style end-of-lines, the program might read in one line, waiting for it to be available, check the end-of-line and write out the line with the desired end-of-line. Or the tool might read in all of the data, do a replacement call on each "wrong" end-of-line in memory, and then write out all of the data. With the first solution, piping speeds things up. With the second solution piping doesn't speed up anything.
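To make that concrete, a toy line-at-a-time filter like the one below (a CRLF-to-LF converter, not the asker's actual tools) benefits from piping, because each line is written out as soon as it is read and the next program in the pipeline can start working immediately:

#include <stdio.h>
#include <string.h>

/* Toy streaming filter: converts Windows line endings to Unix ones.
   Because it writes each line as soon as it is read, the next program
   in a pipeline can start consuming output right away. */
int main(void)
{
    char line[8192];
    while (fgets(line, sizeof line, stdin) != NULL) {
        size_t len = strlen(line);
        if (len >= 2 && line[len - 2] == '\r' && line[len - 1] == '\n') {
            line[len - 2] = '\n';
            line[len - 1] = '\0';
        }
        fputs(line, stdout);
    }
    return 0;
}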
The reason it is truly so hard to answer such a question is that the fix you need really depends on the code you have, the problem you are trying to solve, and the means by which you are solving it now. In the end, there isn't always a 100% guarantee that the code can be sped up; however, virtually every piece of code has opportunities to be sped up. Use profiling to speed up the parts that are slow, instead of wasting your time working on a part of your program that is only called once and represents 0.001% of the program's runtime.
Remember if you speed up something that is 0.001% of your program's runtime by 50%, you actually only sped up your entire program by 0.0005%. Use profiling to determine the block of code that's taking up 90% of your runtime and concentrate on it.
I do have to wonder why, if A and B depend on each other to run, you want them to be part of an independent toolset.
One solution is a compromise between the two (a rough sketch in code follows this list):
Create a library that contains A.
Create a library that contains B.
Create a program that spawns two threads, thread 1 running A and thread 2 running B.
Create a semaphore that tells A to run and another that tells B to run.
After the function that calls A in thread 1, increment B's semaphore.
After the function that calls B in thread 2, increment A's semaphore.
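A rough pthreads/POSIX-semaphore sketch of that compromise (run_A and run_B are placeholders for whatever the two libraries would expose):

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

/* Placeholder work functions standing in for the real A and B libraries. */
static void run_A(int i) { printf("A iteration %d\n", i); }
static void run_B(int i) { printf("B iteration %d\n", i); }

static sem_t a_may_run, b_may_run;

static void *thread_a(void *arg)
{
    (void)arg;
    for (int i = 0; i < 1000; i++) {
        sem_wait(&a_may_run);   /* wait until B has produced data */
        run_A(i);
        sem_post(&b_may_run);   /* tell B its input is ready */
    }
    return NULL;
}

static void *thread_b(void *arg)
{
    (void)arg;
    for (int i = 0; i < 1000; i++) {
        sem_wait(&b_may_run);
        run_B(i);
        sem_post(&a_may_run);
    }
    return NULL;
}

int main(void)
{
    pthread_t ta, tb;
    sem_init(&a_may_run, 0, 1);   /* A goes first */
    sem_init(&b_may_run, 0, 0);
    pthread_create(&ta, NULL, thread_a, NULL);
    pthread_create(&tb, NULL, thread_b, NULL);
    pthread_join(ta, NULL);
    pthread_join(tb, NULL);
    return 0;
}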
Another possibility is to use file locking in your programs (see the sketch after this list):
Make both A and B execute in infinite loops (or however many times you're processing data)
Add code to attempt to lock both files at the beginning of the infinite loop in A and B (if the locks cannot be acquired, sleep and try again so that you don't do anything until you have the lock).
Add code to unlock and sleep for longer than the sleep in step 2 at the end of each loop.
Either of these solves the problem of the overhead of launching the programs between runs.
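For the file-locking variant, a sketch of program A's loop using flock() (program B would mirror it with the roles swapped; the file names follow the question, and the retry and sleep intervals are arbitrary):

#include <fcntl.h>
#include <sys/file.h>
#include <unistd.h>

int main(void)
{
    int fd1 = open("data",  O_RDWR);
    int fd2 = open("data2", O_RDWR);
    if (fd1 < 0 || fd2 < 0)
        return 1;

    for (;;) {
        /* Try to take both locks; back off briefly if the other side has them. */
        while (flock(fd1, LOCK_EX | LOCK_NB) != 0 ||
               flock(fd2, LOCK_EX | LOCK_NB) != 0) {
            flock(fd1, LOCK_UN);          /* release any partial lock */
            usleep(10 * 1000);            /* 10 ms retry sleep */
        }

        /* ... read "data", compute, write results to "data2" ... */

        flock(fd1, LOCK_UN);
        flock(fd2, LOCK_UN);
        usleep(50 * 1000);                /* sleep longer than the retry sleep */
    }
}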
It's almost certainly not application startup which is the bottleneck. Linux will end up caching large portions of your programs, which means that launching will progressively get faster (to a point) the more times you start your program.
You need to look elsewhere for your bottleneck.

Simulating file system access

I am designing a file system in user space and need to test it. I do not want to use the available benchmarking tools as my requirements are different. So to test the file system I wish to simulate file access operations. To do this, I first use the ftw() function to walk through one of my existing (experimental) file systems and list all the files and directories in a file.
Then I invoke a simulator to simulate file access by a number of processes. The simulator randomly starts a process, i.e. it spawns a thread which does what a real process would have done. The thread randomly selects a file operation (read, write, rename, etc.) and selects arguments for this operation from the list (generated by ftw()). The thread does a number of such file operations and then exits, marking the end of a process. The simulator continues to spawn threads; thread execution can overlap just as real processes do. As operations are performed by threads, files get inserted, deleted, and renamed, and this is updated in the list of files.
I have not yet started coding. Does the plan seem sane? I am also not sure how to code the simulator: how will it spawn threads over a period of time? Should I be using some random delay to do this?
Thanks
Yep, that seems fairly reasonable to me. I would consider attempting to impose a statistical distribution over your file operations (and accesses to particular files) that is somehow matched to your expected workload. You might be able to find some statistics about typical filesystem workloads as a starting point.
That sounds about right for a decent test case, just to make sure it's working. You could use sleep() to wait between spawning threads, or just spawn them all at once and have each do an operation, wait a bit, do another operation, and so on. IMO if you hit it hard with a lot of requests and it works, then there's a good chance your filesystem will do just fine. Take an example from PostMark, which does nothing but append like crazy to different files, and from other benchmarks that do random-access reads/writes in different locations to make sure that pages have to be read from disk.
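A bare-bones sketch of the asker's plan with sleep()-style delays between spawns (the walk root, thread count, and delays are made up, and the "operation" is just an fopen placeholder rather than a real random read/write/rename mix):

#include <ftw.h>
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/stat.h>
#include <unistd.h>

#define MAX_FILES 10000
#define NPROCS    50

/* Collected by the ftw() walk; a real simulator would also track
   directories and keep this list updated as files are created/renamed. */
static char *files[MAX_FILES];
static int nfiles = 0;

static int collect(const char *path, const struct stat *sb, int typeflag)
{
    (void)sb;
    if (typeflag == FTW_F && nfiles < MAX_FILES)
        files[nfiles++] = strdup(path);
    return 0;
}

static void *simulated_process(void *arg)
{
    (void)arg;
    for (int op = 0; op < 20; op++) {
        const char *path = files[rand() % nfiles];
        /* Placeholder for a randomly chosen read/write/rename/... call. */
        FILE *f = fopen(path, "r");
        if (f)
            fclose(f);
        usleep(rand() % 5000);            /* think time between operations */
    }
    return NULL;
}

int main(void)
{
    pthread_t procs[NPROCS];

    ftw("/path/to/experimental/fs", collect, 16);   /* hypothetical mount point */
    if (nfiles == 0)
        return 1;

    for (int i = 0; i < NPROCS; i++) {
        pthread_create(&procs[i], NULL, simulated_process, NULL);
        usleep(rand() % 100000);          /* random delay between "process" starts */
    }
    for (int i = 0; i < NPROCS; i++)
        pthread_join(procs[i], NULL);
    return 0;
}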
