When I use the cv2_imshow code on my custom dataset, I can view the detection results on the image in Google Colaboratory. Now I want to save this image to Google Drive.
v = Visualizer(im[:, :, ::-1], metadata=microcontroller_metadata, scale=1.2)
v = v.draw_instance_predictions(outputs["instances"].to("cpu"))
cv2_imshow(v.get_image()[:, :, ::-1])
However, when I use the demo.py script provided by Detectron2, I get results with kites and other COCO classes rather than my custom classes.
I use this command to run demo.py:
!python demo.py --config-file detectron2/configs/COCO-Detection/faster_rcnn_R_101_FPN_3x.yaml --input gdcnn/0_img_input/validate/validate{a}.jpg --confidence-threshold 0.2 --output path/to/googledrive/predictionfasterrcnn.jpg --opts MODEL.WEIGHTS output/model_final.pth
You can save the file like this:
v.save(filepath)
or
cv2.imwrite(filepath, v.get_image()[:, :, ::-1])
Save the output file by using OpenCV's image-writing function:
cv2.imwrite(filename, img)
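Since the question asks about Google Drive specifically, here is a minimal sketch of saving the visualization into a mounted Drive folder (the mount point and the output path under MyDrive are illustrative assumptions, not from the original post):
import cv2
from google.colab import drive
# Mount Google Drive into the Colab filesystem (prompts for authorization)
drive.mount('/content/drive')
# Convert the Visualizer output back to BGR and write it to Drive
cv2.imwrite('/content/drive/MyDrive/prediction.jpg', v.get_image()[:, :, ::-1])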
I would like to instantiate the Project.toml that is built into a Pluto notebook, using the native package manager. How do I read it from the notebook?
Say, I have a notebook, e.g.,
nb_source = "https://raw.githubusercontent.com/fonsp/Pluto.jl/main/sample/Interactivity.jl"
How can I create a temporary environment, and get the packages for the project of this notebook? In particular, how do I complete the following code?
cd(mktempdir())
import Pkg; Pkg.activate(".")
import Pluto, Pkg
nb = download(nb_source, ".")
### Some code using Pluto's built-in package manager
### to read the Project.toml from nb --> nb_project_toml
cp(nb_project_toml, "./Project.toml", force=true)
Pkg.instantiate(".")
So, first of all, the notebook you are looking at is a Pluto 0.17.0 notebook, which does not have the internal package manager. I think it was added in Pluto 0.19.0.
This is what the very last few cells look like in a notebook that uses the internal Pluto package manager:
# ╔═╡ 00000000-0000-0000-0000-000000000001
PLUTO_PROJECT_TOML_CONTENTS = """
[deps]
Plots = "91a5bcdd-55d7-5caf-9e0b-520d859cae80"
PlutoUI = "7f904dfe-b85e-4ff6-b463-dae2292396a8"
PyCall = "438e738f-606a-5dbb-bf0a-cddfbfd45ab0"
Statistics = "10745b16-79ce-11e8-11f9-7d13ad32a3b2"
[compat]
Plots = "~1.32.0"
PlutoUI = "~0.7.40"
PyCall = "~1.94.1"
"""
# ╔═╡ 00000000-0000-0000-0000-000000000002
PLUTO_MANIFEST_TOML_CONTENTS = """
# This file is machine-generated - editing it directly is not advised
julia_version = "1.8.0"
...
so you could add something like:
include(nb)  # running the notebook file defines PLUTO_PROJECT_TOML_CONTENTS
write("./Project.toml", PLUTO_PROJECT_TOML_CONTENTS)
This has the drawback of running all the code in your notebook, which might take a while.
Alternatively, you could read the notebook file until you find the line # ╔═╡ 00000000-0000-0000-0000-000000000001 and then either parse the string that follows yourself, or eval everything after that line (something like eval(Meta.parse(string_stuff_after_comment)) should do it).
I hope that helps a little bit.
Pluto.load_notebook_nobackup() reads the information of a notebook. This gives a dictionary of the deps in the field .nbpkg_ctx.env.project.deps:
import Pluto, Pkg
Pkg.activate(;temp=true)
nb_source = "https://raw.githubusercontent.com/fonsp/Pluto.jl/main/sample/PlutoUI.jl.jl"
nb = download(nb_source)
nb_info = Pluto.load_notebook_nobackup(nb)
deps = nb_info.nbpkg_ctx.env.project.deps
Pkg.add([Pkg.PackageSpec(name=p, uuid=u) for (p, u) in deps])
I am working on an Information Retrieval project, using Google Colab. I am at the stage where I have computed some features ("input_features") and labels ("labels") in a for loop, which took about 4 hours to finish.
So at the end I have appended the results to an array:
input_features = np.array(input_features)
labels = np.array(labels)
So my question would be:
Is it possible to save those results in order to use them for future purposes when using Google Colab?
I have found 2 options that might work, but I don't know where these files are created.
1) Save them as CSV files. My code would be:
from numpy import savetxt
# save to csv file
savetxt('input_features.csv', input_features, delimiter=',')
savetxt('labels.csv', labels, delimiter=',')
And in order to load them:
from numpy import loadtxt
# load array
input_features = loadtxt('input_features.csv', delimiter=',')
labels = loadtxt('labels.csv', delimiter=',')
# print the array
print(input_features)
print(labels)
But I still don't get anything back when I print.
2) Save the arrays by using pickle, where I followed the instructions from here:
https://colab.research.google.com/drive/1EAFQxQ68FfsThpVcNU7m8vqt4UZL0Le1#scrollTo=gZ7OTLo3pw8M
from google.colab import files
import pickle
def features_pickeled(input_features, results):
    input_features = input_features + '.txt'
    pickle.dump(results, open(input_features, 'wb'))
    files.download(input_features)

def labels_pickeled(labels, results):
    labels = labels + '.txt'
    pickle.dump(results, open(labels, 'wb'))
    files.download(labels)
And to load them back:
def load_from_local():
    loaded_features = {}
    uploaded = files.upload()
    for input_features in uploaded.keys():
        unpickeled_features = uploaded[input_features]
        loaded[input_features] = pickle.load(BytesIO(data))
    return loaded_features

def load_from_local():
    loaded_labels = {}
    uploaded = files.upload()
    for labels in uploaded.keys():
        unpickeled_labels = uploaded[labels]
        loaded[labels] = pickle.load(BytesIO(data))
    return loaded_labes
How do I print the pickled files to see if I have them ready for use?
When using python I would do something like this for pickle:
# Create pickle file
with open("name.pickle", "wb") as pickle_file:
    pickle.dump(name, pickle_file)

# Load the pickle file
with open("name.pickle", "rb") as name_pickled:
    name_b = pickle.load(name_pickled)
But the thing is that I don't see any files being created in my Google Drive.
Is my code correct or do I miss some part of the code?
That was a long description, but hopefully it explains in detail what I want to do and what I have done about this issue.
Thank you in advance for your help.
Google Colaboratory notebook instances are never guaranteed to have access to the same resources when you disconnect and reconnect because they are run on virtual machines. Therefore, you can't "save" your data in Colab. Here are a few solutions:
Colab saves your code. If the for loop operation you referenced doesn't take an extreme amount of time to run, just leave the code and run it every time you connect your notebook.
Check out np.save. This function allows you to save an array to a binary file. Then, you could re-upload your binary file when you reconnect your notebook. Better yet, you could store the binary file on Google Drive, mount your drive to your notebook, and reference it like that.
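For the np.save route, a minimal sketch (the Drive mount point and file paths are illustrative assumptions):
import numpy as np
from google.colab import drive
# Mount Google Drive so files written there persist across sessions
drive.mount('/content/gdrive')
# Save the arrays as binary .npy files on Drive
np.save('/content/gdrive/MyDrive/input_features.npy', input_features)
np.save('/content/gdrive/MyDrive/labels.npy', labels)
# In a later session, mount again and load them back
input_features = np.load('/content/gdrive/MyDrive/input_features.npy')
labels = np.load('/content/gdrive/MyDrive/labels.npy')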
# Mount driver to authenticate yourself to gdrive
from google.colab import drive
drive.mount('/content/gdrive')
#---
# Import necessary libraries
import numpy as np
from numpy import savetxt
import pandas as pd
#---
# Create array
arr = np.array([1, 2, 3, 4, 5])
# save to csv file
savetxt('arr.csv', arr, delimiter=',') # You will see the result if you click the file icon (left panel)
And then you can load it again by:
# You can copy the path when you find your file in the file icon
arr = pd.read_csv('/content/arr.csv', sep=',', header=None) # You can also save your result as a txt file
arr
I have a Markdown document that was generated using Knitr (literate programming). This markdown document gets converted to Microsoft Word (docx) and HTML using pandoc. Now I would like to include specific parts from the Markdown in HTML, and others in docx. The concrete use case is that I'm able to generate JS+HTML charts using rCharts which is fine for HTML, but obviously doesn't render in docx, so I would like to use a simple PNG image in that case.
Is there some specific pandoc syntax or trick that I can use for this?
One way to solve this is to post-process the Markdown generated by knitr.
I output some Mustache tags and then render them using the R package whisker.
Roughly the code looks like:
md <- knit(rmd, envir=e)
docx.temp <- tempfile()
html.temp <- tempfile()
# Render the Mustache tags twice: once for HTML output, once for docx
writeLines(whisker.render(readLines(md), list(html=T)), html.temp)
writeLines(whisker.render(readLines(md), list(html=F)), docx.temp)
docx <- pandoc(docx.temp, format="docx")
html <- pandoc(html.temp, format="html")
file.copy(docx, "./report.docx", overwrite=T)
file.copy(html, "./report.html", overwrite=T)
With the Rmd (knitr) containing something roughly like
{{^html}}
```{r}
WITHOUT HTML
```
{{/html}}
{{#html}}
```{r}
WITH HTML
```
{{/html}}
Nautilus shows me a thumbnail of a file: if it's an image it shows a preview, if it's a video it shows a frame from the video, and if it's a document it shows the application icon.
How can I access the image?
I see they are cached in ~/.thumbnails/, but they are all given unique names.
The thumbnail filename is an MD5 hash, but the hash is of the file's absolute URI (without a trailing newline), not of the bare filename.
So you need to do:
echo -n 'file:///home/yuzem/pics/foo.jpg' | md5sum
And if the path has spaces, you need to convert them to '%20', e.g. for "foo bar.jpg":
echo -n 'file:///home/yuzem/pics/foo%20bar.jpg' | md5sum
Found at Ubuntu forums. See also the Thumbnail Managing Standard document, linked from the freedesktop.org wiki.
A simple Python tool to calculate the thumbnail path, written by Raja and shared as an ActiveState code recipe. Note, however, that this code does not escape filenames with spaces or special characters, which means it does not work for all filenames.
"""Get the thumbnail stored on the system.
Should work on any linux system following the desktop standards"""
import hashlib
import os
def get_thumbnailfile(filename):
"""Given the filename for an image, return the path to the thumbnail file.
Returns None if there is no thumbnail file.
"""
# Generate the md5 hash of the file uri
file_hash = hashlib.md5('file://'+filename).hexdigest()
# the thumbnail file is stored in the ~/.thumbnails/normal folder
# it is a png file and name is the md5 hash calculated earlier
tb_filename = os.path.join(os.path.expanduser('~/.thumbnails/normal'),
file_hash) + '.png'
if os.path.exists(tb_filename):
return tb_filename
else:
return None
if __name__ == '__main__':
import sys
if len(sys.argv) < 2:
print('Usage: get_thumbnail.py filename')
sys.exit(0)
filename = sys.argv[1]
tb_filename = get_thumbnailfile(filename)
if tb_filename:
print('Thumbnail for file %s is located at %s' %(filename, tb_filename))
else:
print('No thumbnail found')
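As noted above, the recipe does not escape spaces or special characters. Here is a sketch that also percent-encodes the path, assuming Python 3 and the ~/.thumbnails layout used above (newer systems store thumbnails under ~/.cache/thumbnails instead):
import hashlib
import os
from urllib.parse import quote

def thumbnail_path(path, size='normal'):
    # Build the file:// URI, percent-encoding spaces and special characters
    uri = 'file://' + quote(os.path.abspath(path))
    name = hashlib.md5(uri.encode('utf-8')).hexdigest() + '.png'
    return os.path.join(os.path.expanduser('~/.thumbnails'), size, name)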
I guess that you need to access the thumbnail programmatically. You want to use the Gio library.
I haven't been able to find a single call that checks for the thumbnail and, if it doesn't exist, falls back to the application icon, so you need to do it in two steps. Here is a sample (sorry, it's Python; I'm not fluent in C):
import gio
import gtk
window = gtk.Window(gtk.WINDOW_TOPLEVEL)
window.show()
hbox = gtk.HBox()
hbox.show()
window.add(hbox)
f = gio.File(path='/home/whatever/you/want.jpg')
info = f.query_info('*')
# We check if there's a thumbnail for our file
preview = info.get_attribute_byte_string ("thumbnail::path")
image = None
if preview:
    image = gtk.image_new_from_file(preview)
else:
    # If there's no thumbnail, we use get_icon, which checks the
    # file's mimetype and returns the correct stock icon.
    icon = info.get_icon()
    image = gtk.image_new_from_gicon(icon, gtk.ICON_SIZE_MENU)
hbox.add(image)
window.show_all()
gtk.main()
The following data is uploaded to my GAE application -
How can I
1. get fields with files only?
2. get filenames of the uploaded files?
get fields with files only
import cgi
values = self.request.POST.itervalues()
files = [v for v in values if isinstance(v, cgi.FieldStorage)]
get filenames of the uploaded files
filenames = [f.filename for f in files]
Edit: corrected snippet, now tested :)
Assuming the data is POSTed using a form, for #2, see Get original filename google app engine
For #1, you could iterate through the self.request.POST multidict and see anything that looks like a file. self.request.POST looks like this:
UnicodeMultiDict([(u'file_1', FieldStorage(u'file_1', u'filename_1')), (u'random_string_field', u'random_string_value')])
Hope that helps you out
-Sam
filename = self.request.POST['file'].filename
file_ext = self.request.POST['file'].type
OR
filename = self.request.params[<form element name with file>].filename
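Putting both answers together, a minimal sketch of an upload handler (the handler name is hypothetical; assumes the webapp2 framework on Python 2, where itervalues() exists):
import cgi
import webapp2

class UploadHandler(webapp2.RequestHandler):
    def post(self):
        # Keep only the POSTed values that are file uploads
        files = [v for v in self.request.POST.itervalues()
                 if isinstance(v, cgi.FieldStorage)]
        # Report the original client-side filename of each upload
        for f in files:
            self.response.write('Got file: %s\n' % f.filename)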