Octave: select a file?

Does Octave have a good way to let the user select an input file? I've seen code like this for Matlab, but it doesn't work in Octave.
A GUI-based method would be preferred, but some sort of command-line choice would work as well. It would be great if there were some way to do this that works in both Matlab and Octave.
I found this for Matlab, but it does not work in Octave, even after installing the Octave Forge Java package for the listdlg function. In Octave, dir() gives you:
647x1 struct array containing the fields:
name
date
bytes
isdir
datenum
statinfo
but I don't know how to convert this to the cell array of strings that listdlg expects.

You already have the Octave Forge Java package installed, so you can create instances of any Java class and call any Java method.
For example, to create a JFileChooser and call the JFileChooser.showOpenDialog(Component parent) method:
% Create a small parent frame for the dialog
frame = javaObject("javax.swing.JFrame");
frame.setBounds(0, 0, 100, 100);
frame.setVisible(true);
% Create the file chooser and show the open dialog
fc = javaObject("javax.swing.JFileChooser");
returnVal = fc.showOpenDialog(frame);
file = fc.getSelectedFile();
file.getName()
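showOpenDialog returns JFileChooser.APPROVE_OPTION when the user confirms the selection; that constant has the value 0, so a minimal check could look like this (a sketch, untested in Octave):
if returnVal == 0   % JFileChooser.APPROVE_OPTION
  file = fc.getSelectedFile();
  disp(file.getAbsolutePath());
end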
By the way, I had some trouble installing the package.
Here is a fix for Ubuntu that also worked on my Debian Testing.
EDIT
@NoBugs, in reply to your comment:
If you need to use listdlg, you can do the following:
d = dir;
str = {d.name};
[sel,ok] = listdlg('PromptString','Select a file:',...
                   'SelectionMode','single',...
                   'ListString',str);
if ok == 1
  disp(str{sel(1)});
end
This should be compatible with Matlab, but I cannot test it right now.
If you want to select multiple files, use this:
d = dir;
str = {d.name};
[sel,ok] = listdlg('PromptString','Select a file:',...
                   'SelectionMode','multiple',...
                   'ListString',str);
if ok == 1
  imax = length(sel);
  for i = 1:imax
    disp(str{sel(i)});
  end
end

I have never come across an open-file dialog in Octave.
If you are looking for a GUI-based method, maybe guioctave can help you. I have never used it, because it appears to be available only for Windows machines.
A possible solution would be to write a little script in Octave that lets the user browse the directories and select a file that way; a minimal sketch follows.
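For example (just a sketch of the idea, untested):
% List the files in the current directory and ask for an index
d = dir;
names = {d(~[d.isdir]).name};   % keep regular files only
for k = 1:numel(names)
  printf("%d: %s\n", k, names{k});
end
idx = input("Select a file by number: ");
if idx >= 1 && idx <= numel(names)
  disp(["You selected " names{idx}]);
end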

Thought I'd provide an updated answer to this old question, since it is appearing in the 'related questions' field for other questions.
Octave provides the uigetdir and uigetfile functions, which do what you expect.
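For example (this works in both Octave and Matlab):
[fname, fpath] = uigetfile('*.m', 'Select a file');
if ischar(fname)   % uigetfile returns 0 if the user cancels
  disp(fullfile(fpath, fname));
end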

Related

Training with Keras + TensorFlow in Python and predicting in C/C++

Is it possible to build a fast model with Keras (with TensorFlow as the backend) and use it to predict in C or C++?
I need to do the prediction inside a C++ program, but I feel much more comfortable doing the model and the training in Keras.
In case you don't need to utilize a GPU in the environment you are deploying to, you could also use my library, called frugally-deep. It is available on GitHub and published under the MIT License: https://github.com/Dobiasd/frugally-deep
frugally-deep allows running forward passes on already-trained Keras models directly in C++ without the need to link against TensorFlow or any other backend.
It not only supports prediction with sequential models but also with more complex models built with the functional API.
In addition to supporting many common layer types, it can keep up with (and sometimes even beat) the performance of TensorFlow on a single CPU. You can find up-to-date benchmark results for some common models in the repo.
Through automatic testing, frugally-deep guarantees that the output of a model used with it in C++ is exactly the same as if run with Keras in Python.
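Usage looks roughly like this (a sketch adapted from memory of the project's README; the exact API may differ between versions, so check the repo, including the script that converts the Keras model to fdeep_model.json):
#include <fdeep/fdeep.hpp>
#include <iostream>

int main() {
    // Load a Keras model previously converted to frugally-deep's format
    const auto model = fdeep::load_model("fdeep_model.json");
    // Run a forward pass on a dummy 4-element input tensor
    const auto result = model.predict(
        {fdeep::tensor(fdeep::tensor_shape(static_cast<std::size_t>(4)),
                       std::vector<float>{1, 2, 3, 4})});
    std::cout << fdeep::show_tensors(result) << std::endl;
}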
Yes, it is possible. TensorFlow provides a stable C API as well as a C++ one.
For more details, you probably want to ask a more specific question.
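As a quick sanity check that you can compile and link against the C API, the following just prints the library version (it assumes libtensorflow and its headers are installed):
#include <stdio.h>
#include <tensorflow/c/c_api.h>

int main(void) {
    // TF_Version() is part of the stable C API
    printf("TensorFlow C library version %s\n", TF_Version());
    return 0;
}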
You can use the cv::dnn module in OpenCV 3.2. See the examples in the OpenCV samples.
In simple steps you can do this:
Save the Keras model into text files in Python.
In C/C++, read those text files and run the model in a multithreaded environment.
The above steps in detail:
For the first step, call save_keras_model_as_text from keras2cpp.py on GitHub:
save_keras_model_as_text(model, open('/content/modeltext.txt', 'w'))
save_keras_model_as_text(model, open('/content/modellayersonlytext.txt', 'w'), noweight=True)
In the C/C++ project, include the keras.h / keras.cpp files.
Convert the input image into the Keras format, or read the input image in the Keras format from a file:
//Assume dst is a pointer to an image object. This can be
//of CxImage / CImg / OpenCV or of any other library.
int h = dst->GetHeight();
int w = dst->GetWidth();
int *img = new int[h*w*3];
int *img_r = img, *img_g = &img[h*w], *img_b = &img[2*h*w];
for (int y = 0; y < h; y++) {
    for (int x = 0; x < w; x++) {
        RGBQUAD rgb = dst->GetPixelColor(x,y);
        *img_r = rgb.rgbRed;   img_r++;
        *img_g = rgb.rgbGreen; img_g++;
        *img_b = rgb.rgbBlue;  img_b++;
    }
}
Call ExecuteKerasSegmentation:
//Text files in which the Keras model is saved
char *modelfile = "modellayersonlytext.txt";
char *weightfile = "modeltext.txt";
int *result = ExecuteKerasSegmentation(img, h, w, 3, modelfile, weightfile);
result contains the segmentation result. You may save it into a PGM file, or convert it into your image library's object and use it further:
save_image_pgm("segmentation_map.pgm", result, h, w, 127); //127 is a scaling factor for binary images

Icelandic language processing in R

I'm a student doing some research with R.
I tried to put some Icelandic text into an array, but R automatically converts it to plain English characters.
artist = vector()
artist[1] = "CHVRCHES"
artist[2] = "Fall-Out-Boy"
artist[3] = "Green-day"
artist[4] = "Sigur-Rós"
When I call the 4th item of the artist array, the console output is
Sigur-Ros
not
Sigur-Rós
So I looked for questions that might help me with this kind of encoding mess, and tried
artist[4] = stri_conv("Sigur-Rós","","UTF-8")
or
artist[4] = iconv("Sigur-Rós","","UTF-8")
But the console showed the same output.
I'm doing this in RStudio and my R version is 3.1.2. The workspace is Windows 8.1, 64-bit.
Does anyone know how to deal with this encoding problem? I really need some help.

Octave won't export LaTeX symbols

I'm having a problem where Octave will render figures just fine in the figure window, but then refuses to properly export them to PNG when I use the print() command. This is also true when I try other formats like EPS or JPG.
My current version of Octave is 3.8.1-1ubuntu1, which is up to date at the time of this post. My Ubuntu version is also 14.04. I do not receive any error messages when the code runs.
The script commands used to plot are pretty basic. For example:
linewidth = 4;
xStr = 'Particle Diameter (\mum)';
yStr = 'Scattering Cross-Section (\mum^2)';
FontName = 'Times New Roman';
LabelFontSize = 22;
AxisFontSize = 18;
F1 = figure(1);
clf('reset');
plot(diameter*1e6,sigma_0*1e12,'k','linewidth',linewidth);
hold on
plot(diameter*1e6,sigma_1*1e12,'r','linewidth',linewidth);
X = xlabel(xStr);
set(X,'FontName',FontName,'fontsize',LabelFontSize);
Y = ylabel(yStr);
set(Y,'FontName',FontName,'fontsize',LabelFontSize);
axis([xMin xMax sigMin sigMax]);
set(gca,'fontsize',AxisFontSize,'linewidth',2);
legend('2.0 \mum','3.8 \mum',4);
print(F1,'Mie.png','-dpng');
The strange thing is that I have other images from months ago that rendered the LaTeX bits just fine, and even used nearly identical code. It almost seems like some recent software upgrade may have killed my plotting.
I appreciate any help you can give me. This issue is driving me nuts.
This is a known problem when using the OpenGL toolkit (graphics_toolkit FLTK), which is the default in Octave 3.8.x. Previous versions used gnuplot for printing.
So you have two choices:
Switch back to gnuplot with "graphics_toolkit gnuplot" before doing any plotting (see the sketch below). You may also add this to your .octaverc so it's set every time you start Octave.
Use LaTeX output: http://wiki.octave.org/Printing_with_FLTK
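For the first option, the top of a plotting script would look something like this (a sketch built from the question's own commands, untested):
graphics_toolkit gnuplot;   % or add this line to your .octaverc
F1 = figure(1);
plot(1:10, (1:10).^2, 'k', 'linewidth', 4);
xlabel('Particle Diameter (\mum)');
print(F1, 'Mie.png', '-dpng');   % the \mum should now survive the export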

Saving type information of datastructures

I am building a framework for my day-to-day tasks. I am programming in Scala, using a lot of type parameters.
Now my goal is to save data structures to files (e.g. XML files). But I realized that this is not possible using XML files. As I am new to this kind of problem, I am asking:
Is there a way to store the types of my data structures in a file? Is there a way in Scala?
Okay guys, you did a great job, basically by naming the thing I was searching for:
it's serialization.
With this in mind I searched the web and was completely astonished by this feature of Java.
Now I do something like:
object Serialize {
  def write[A](o: A): Array[Byte] = {
    val ba = new java.io.ByteArrayOutputStream(512)
    val out = new java.io.ObjectOutputStream(ba)
    out.writeObject(o)
    out.close()
    ba.toByteArray()
  }
  def read[A](buffer: Array[Byte]): A = {
    val in = new java.io.ObjectInputStream(new java.io.ByteArrayInputStream(buffer))
    in.readObject().asInstanceOf[A]
  }
}
The resulting byte arrays can be written to a file, and everything works well.
And I am totally fine with this solution not being human-readable. If I change my mind someday, there are JSON parsers all over the web.
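For example, a round trip through a file could look like this (a sketch using the Serialize object above; the case class and file name are made up for illustration):
import java.nio.file.{Files, Paths}

// Hypothetical data structure; case classes are serializable by default
case class Point(x: Int, y: Int)

val path = Paths.get("point.bin")                 // made-up file name
Files.write(path, Serialize.write(Point(1, 2)))   // save the bytes to disk
val p = Serialize.read[Point](Files.readAllBytes(path))
println(p)                                        // prints Point(1,2)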

Plotting a word-cloud by date for a twitter search result? (using R)

I wish to search Twitter for a word (let's say #google), and then be able to generate a tag cloud of the words used in tweets, but according to dates (for example, having a moving window of an hour that moves by 10 minutes each time and shows me how different words became more frequently used throughout the day).
I would appreciate any help on how to go about doing this regarding: resources for the information, code for the programming (R is the only language I am apt in using) and ideas on visualization. Questions:
How do I get the information?
In R, I found that the twitteR package has the searchTwitter command. But I don't know how big an "n" I can get from it. Also, it doesn't return the dates on which the tweets originated.
I see here that I could get up to 1500 tweets, but this requires me to do the parsing manually (which leads me to step 2). Also, for my purposes, I would need tens of thousands of tweets. Is it even possible to get them retrospectively (for example, by requesting older posts each time through the API URL)? If not, there is the more general question of how to create a personal archive of tweets on your home computer (a question which might be better left to another SO thread, although any insights from people here would be very interesting for me to read).
How to parse the information (in R)? I know that R has functions that could help, from the RCurl and twitteR packages. But I don't know which, or how to use them. Any suggestions would be of help.
How to analyse? How do I remove all the "not interesting" words? I found that the "tm" package in R has this example:
reuters <- tm_map(reuters, removeWords, stopwords("english"))
Would this do the trick? Should I do something else/more?
Also, I imagine I would like to do that after cutting my dataset according to time (which will require some POSIX-like functions; I am not exactly sure which would be needed here, or how to use them).
And lastly, there is the question of visualization. How do I create a tag cloud of the words? I found a solution for this here, any other suggestion/recommendations?
I believe I am asking a huge question here, but I tried to break it into as many straightforward questions as possible. Any help will be welcomed!
Best,
Tal
Word/Tag cloud in R using "snippets" package
www.wordle.net
Using the openNLP package you could POS-tag the tweets (POS = part of speech) and then extract just the nouns, verbs, or adjectives for visualization in a word cloud.
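A hedged sketch of that POS-tagging step (function names follow the documented openNLP/NLP example; untested here):
library(NLP)
library(openNLP)
s <- as.String("The quick brown fox jumps over the lazy dog")
# Sentence and word annotations are needed before POS tagging
a <- annotate(s, list(Maxent_Sent_Token_Annotator(),
                      Maxent_Word_Token_Annotator(),
                      Maxent_POS_Tag_Annotator()))
w <- subset(a, type == "word")
tags <- sapply(w$features, `[[`, "POS")
s[w[tags %in% c("NN", "NNS", "NNP")]]   # keep only the nouns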
Maybe you can query Twitter and use the current system time as a timestamp, write to a local database, and query again in increments of x secs/mins, etc.
There is historical data available at http://www.readwriteweb.com/archives/twitter_data_dump_infochimp_puts_1b_connections_up.php and http://www.wired.com/epicenter/2010/04/loc-google-twitter/
As for the plotting piece: I did a word cloud here: http://trends.techcrunch.com/2009/09/25/describe-yourself-in-3-or-4-words/ using the snippets package; my code is in there. I manually pulled out certain words. Check it out, and let me know if you have more specific questions.
I note that this is an old question, and there are several solutions available via web search, but here's one answer (via http://blog.ouseful.info/2012/02/15/generating-twitter-wordclouds-in-r-prompted-by-an-open-learning-blogpost/):
require(twitteR)
searchTerm='#dev8d'
#Grab the tweets
rdmTweets <- searchTwitter(searchTerm, n=500)
#Use a handy helper function to put the tweets into a dataframe
tw.df=twListToDF(rdmTweets)
##Note: there are some handy, basic Twitter related functions here:
##https://github.com/matteoredaelli/twitter-r-utils
#For example:
RemoveAtPeople <- function(tweet) {
  gsub("@\\w+", "", tweet)   # strip @username mentions
}
#Then for example, remove @'d names
tweets <- as.vector(sapply(tw.df$text, RemoveAtPeople))
##Wordcloud - scripts available from various sources; I used:
#http://rdatamining.wordpress.com/2011/11/09/using-text-mining-to-find-out-what-rdatamining-tweets-are-about/
#Call with eg: tw.c=generateCorpus(tw.df$text)
generateCorpus = function(df, my.stopwords=c()){
  #Install the textmining library
  require(tm)
  #The following is cribbed and seems to do what it says on the can
  tw.corpus = Corpus(VectorSource(df))
  # remove punctuation
  tw.corpus = tm_map(tw.corpus, removePunctuation)
  #normalise case
  tw.corpus = tm_map(tw.corpus, tolower)
  # remove stopwords
  tw.corpus = tm_map(tw.corpus, removeWords, stopwords('english'))
  tw.corpus = tm_map(tw.corpus, removeWords, my.stopwords)
  tw.corpus
}
wordcloud.generate = function(corpus, min.freq=3){
  require(wordcloud)
  doc.m = TermDocumentMatrix(corpus, control = list(minWordLength = 1))
  dm = as.matrix(doc.m)
  # calculate the frequency of words
  v = sort(rowSums(dm), decreasing=TRUE)
  d = data.frame(word=names(v), freq=v)
  #Generate the wordcloud
  wc = wordcloud(d$word, d$freq, min.freq=min.freq)
  wc
}
print(wordcloud.generate(generateCorpus(tweets,'dev8d'),7))
##Generate an image file of the wordcloud
png('test.png', width=600,height=600)
wordcloud.generate(generateCorpus(tweets,'dev8d'),7)
dev.off()
#We could make it even easier if we hide away the tweet grabbing code. eg:
tweets.grabber = function(searchTerm, num=500){
  require(twitteR)
  rdmTweets = searchTwitter(searchTerm, n=num)
  tw.df = twListToDF(rdmTweets)
  as.vector(sapply(tw.df$text, RemoveAtPeople))
}
#Then we could do something like:
tweets=tweets.grabber('ukgc12')
wordcloud.generate(generateCorpus(tweets),3)
I would like to answer your question about making a big word cloud.
What I did is:
Use s0.tweet <- searchTwitter(KEYWORD, n=1500) for 7 days or more, such as THIS.
Combine them with this command:
rdmTweets = c(s0.tweet, s1.tweet, s2.tweet, s3.tweet, s4.tweet, s5.tweet, s6.tweet, s7.tweet)
The result: this square cloud consists of about 9000 tweets.
Source: People voice about Lynas Malaysia through Twitter Analysis with R CloudStat
Hope it helps!
