Text Detection using tensorflowjs - tensorflow.js

I want to do text detection in an image using only TensorFlow.js or OpenCV.js. I have already built an EAST model in Keras and converted it to a TensorFlow.js model.
Can anyone help me with this? Any resource would be great.
Thanks.

So, initially you need to download the EAST frozen model and then convert it to a TensorFlow.js model using the command below:
tensorflowjs_converter --input_format=tf_frozen_model --output_node_names='feature_fusion/Conv_7/Sigmoid,feature_fusion/concat_3' /path_to_model /path_to_where_you_want_to_save_converted_model
Next, after taking an input image and loading the model, the code below will detect whether text is present or not:
$("#predict-button").click(async function () {
let image = $("#selected-image").get(0);
let tensor = tf.browser.fromPixels(image)
.resizeNearestNeighbor([640, 320])
.expandDims(0);
tensor = tf.cast(tensor, 'float32')
const [output1, output2] = await model.predict(tensor);
const data1 = await output1.data();
const data2 = await output2.data();
As the EAST model gives two outputs, i.e. scores and geometry, data1 here will give the geometry (which I ignored, because my end goal was to detect whether text is present, not to localize it) and data2 will give the scores.
Next, I put a threshold of 0.5 to decide whether text is present: if the probability is greater than 0.5, text is present; if it is less than 0.5, no text is present in the image.
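For illustration, the threshold check could look something like this (a sketch only; data2 is the score buffer read above, and the 0.5 cutoff is the one just mentioned):
// inside the click handler above, after reading data2 (the scores)
const threshold = 0.5;
const maxScore = data2.reduce((max, s) => Math.max(max, s), 0);
const textPresent = maxScore > threshold;
console.log(textPresent ? "Text detected" : "No text detected");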
Note: for now, I have skipped the preprocessing step (except the resize), where the mean RGB value is subtracted from the RGB values of the input image.
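If you later want to add that mean-subtraction step, it would look roughly like the sketch below; the mean values here are the ones commonly used with EAST, so treat them as an assumption and match whatever your model was trained with:
// subtract the per-channel mean before calling model.predict (values assumed)
const means = tf.tensor1d([123.68, 116.78, 103.94]);
tensor = tensor.sub(means);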

Related

AzureMap : Real Time Alert pop up

I am new to Azure Maps, with limited knowledge of JavaScript, and I'm looking for help getting a real-time alert, based on some random flag, during the fleet movement based on the coordinates.
I tried to follow multiple sources to design it, including sample code.
My requirement is:
A popup should appear only on arrival of the fleet (truck).
Thanks
To verify: when the truck gets to the end of the line, you want to open a popup. I'm assuming you have a constant flow of data updating the truck position and that you can easily grab the truck's coordinate. As such, you would only need a function to determine if the truck's coordinate is at the end of the route line. You will likely need to account for a margin of error (i.e. within 15 meters of the end of the line), as a single coordinate can represent a single molecule with enough decimal places, and GPS devices typically have an accuracy of +/- 15 meters. With this in mind, all you would need to do is calculate the distance from the truck's coordinate to the last coordinate of the route line. For example:
var lastRouteCoord = [-110, 45];
var truckCoord = [-110.0001, 45.0001];
var minDistance = 15;
//Get the distance between the coordinates (by default this function returns a distance in meters).
var distance = atlas.math.getDistanceTo(lastRouteCoord, truckCoord);
if (distance <= minDistance) {
    // Open popup.
    // Examples: https://azuremapscodesamples.azurewebsites.net/index.html#Popups
}
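If it helps, the popup itself could be opened roughly like this (a sketch using the Azure Maps Popup class; the map variable and the popup content are placeholders):
// assumes an existing atlas.Map instance named map
var popup = new atlas.Popup({
    content: '<div style="padding:10px">Truck has arrived</div>',
    position: truckCoord
});
popup.open(map);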

How to visualize LabelMe database using Matlab

The LabelMe database can be downloaded from http://www.cs.toronto.edu/~norouzi/research/mlh/data/LabelMe_gist.mat
However, there is another link http://labelme.csail.mit.edu/Release3.0/
The webpage has a toolbox, but I could not find any database to download. So I was wondering if I could use the LabelMe_gist.mat, which has the following fields. The names field contains the labels for the images, and img perhaps contains the images. How do I display the training and test images? I tried
im = imread(img)
Error using imread>parse_inputs (line 486)
The filename or url argument must be a string.
Error in imread (line 336)
[filename, fmt_s, extraArgs, msg] = parse_inputs(varargin{:});
but surely this is not the way. Please help
load LabelMe_gist.mat;
load('LabelMe_gist.mat', 'img')
Since we had no idea from your post what kind of data this is, I went ahead and downloaded it. Turns out, img is a collection of 22019 images of size 32x32 (RGB). This is why img is a 32 x 32 x 3 x 22019 variable. Therefore, the i-th image is accessible via imshow(img(:,:,:,i));
Here is an animation of all of them (press Ctrl+C to interrupt):
for iImage = 1:size(img, 4)
    figure(1); clf;
    imshow(img(:,:,:,iImage));
    drawnow;
end

FSharpChart with Windows.Forms very slow for many points

I use code like the example below to do basic plotting of a list of values from F# Interactive. When plotting more points, the time taken to display increases dramatically. In the examples below, 10^4 points display in 4 seconds, whereas 4·10^4 points take a patience-testing 53 seconds to display. Overall it's roughly as if the time to plot N points grows like N^2.
The result is that I'll probably add an interpolation layer in front of this code, but:
1) I wonder if someone who knows the workings of FSharpChart and Windows.Forms could explain what is causing this behaviour? (The data is bounded, so one thing that seems to be ruled out is the display needing to adjust its scale.)
2) Is there a simple remedy other than interpolating the data myself?
let plotl (f: float list) =
    let chart =
        FSharpChart.Line(f, Name = "")
        |> FSharpChart.WithSeries.Style(Color = System.Drawing.Color.Red, BorderWidth = 2)
    let form = new Form(Visible = true, TopMost = true, Width = 700, Height = 500)
    let ctl = new ChartControl(chart, Dock = DockStyle.Fill)
    form.Controls.Add(ctl)

let z1 = [ for i in 1 .. 10000 do yield sin(float(i * i)) ]
let z2 = [ for i in 1 .. 20000 do yield sin(float(i * i)) ]

plotl z1
plotl z2
First of all, FSharpChart is a name used in an older version of the library. The latest version is called F# Charting, comes with new documentation and uses just Chart.
To answer your question, Chart.Line and Chart.Points are quite slow for a large number of points. The library also has Chart.FastLine and Chart.FastPoints (which do not support as many features, but are faster). So, try getting the latest version of F# Charting and using the "Fast" version of the method.
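For example, with the F# Charting package the call would look roughly like this sketch (the path to the load script depends on your package setup; check the current API docs for styling options):
#load "FSharp.Charting.fsx"   // sets up chart display in F# Interactive
open FSharp.Charting

let z = [ for i in 1 .. 40000 do yield sin(float(i * i)) ]
// FastLine skips some styling features in exchange for much faster rendering
Chart.FastLine(z)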

How to convert a byte[] to a BufferedImage in Java?

I'm posting this thread because I'm having some difficulties dealing with pictures in Java. I would like to be able to convert a picture into a byte[] array, and then be able to do the reverse operation, so I can change the RGB values of each pixel and then make a new picture. I want to use this solution because setRGB() and getRGB() of BufferedImage may be too slow for huge pictures (correct me if I'm wrong).
I read some posts here on how to obtain a byte[] array (such as here), so that each pixel is represented by 3 or 4 cells of the array containing the red, the green and the blue values (plus the alpha value when there are 4 cells), which is quite useful and easy to use for me. Here's the code I use to obtain this array (stored in a PixelArray class I've created):
public PixelArray(BufferedImage image)
{
    width = image.getWidth();
    height = image.getHeight();
    DataBuffer toArray = image.getRaster().getDataBuffer();
    array = ((DataBufferByte) toArray).getData();
    hasAlphaChannel = image.getAlphaRaster() != null;
}
My big trouble is that I haven't found any efficient method to convert this byte[] array back into a new image, if I wanted to transform the picture (for example, remove the blue/green values and keep only the red one). I tried these solutions:
1) Making a DataBuffer object, then a SampleModel, to finally create a WritableRaster and then a BufferedImage (with additional ColorModel and Hashtable objects). It didn't work because I apparently don't have all the information I need (I have no idea what the Hashtable in the BufferedImage() constructor is for).
2) Using a ByteArrayInputStream. This didn't work because the byte[] array expected by ByteArrayInputStream has nothing to do with mine: it represents each byte of the file, not each component of each pixel (with 3-4 bytes per pixel)...
Could someone help me?
Try this:
private BufferedImage createImageFromBytes(byte[] imageData) {
    ByteArrayInputStream bais = new ByteArrayInputStream(imageData);
    try {
        return ImageIO.read(bais);
    } catch (IOException e) {
        throw new RuntimeException(e);
    }
}
I have tried the approaches mentioned here, but for some reason neither of them worked. Using ByteArrayInputStream and ImageIO.read(...) returns null, whereas byte[] array = ((DataBufferByte) image.getRaster().getDataBuffer()).getData(); returns a copy of the image data, not a direct reference to it (see also here).
However, the following worked for me. Let's suppose that the dimensions and the type of the image data are known, and let byte[] srcbuf be the buffer of the data to be converted into a BufferedImage. Then,
Create a blank image, for example
img=new BufferedImage(width, height, BufferedImage.TYPE_3BYTE_BGR);
Convert the data array to Raster and use setData to fill the image, i.e.
img.setData(Raster.createRaster(img.getSampleModel(), new DataBufferByte(srcbuf, srcbuf.length), new Point() ) );
BufferedImage image = new BufferedImage(width, height, BufferedImage.TYPE_3BYTE_BGR);
byte[] array = ((DataBufferByte) image.getRaster().getDataBuffer()).getData();
System.arraycopy(pixelArray, 0, array, 0, array.length);
This method does tend to get out of sync when you try to use the Graphics object of the resulting image. If you need to draw on top of your image, construct a second image (which can be persistent, i.e. not constructed every time but re-used) and drawImage the first one onto it, roughly as sketched below.
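A small sketch of that second-image approach (names are illustrative):
// Draw the pixel-backed image onto a separate, reusable image and do any extra drawing there.
BufferedImage canvas = new BufferedImage(width, height, BufferedImage.TYPE_3BYTE_BGR);
Graphics2D g = canvas.createGraphics();
g.drawImage(image, 0, 0, null);
// ... additional drawing with g goes here ...
g.dispose();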
Several people upvoted the comment that the accepted answer is wrong.
If the accepted answer isn't working, it may be because ImageIO doesn't have support for the type of image you're trying to read, for example TIFF images.
To make it work, you need to add an extra jar to handle the image type.
You can add jai-imageio-core-1.3.1.jar to your classpath with:
<!-- https://mvnrepository.com/artifact/com.github.jai-imageio/jai-imageio-core -->
<dependency>
    <groupId>com.github.jai-imageio</groupId>
    <artifactId>jai-imageio-core</artifactId>
    <version>1.3.1</version>
</dependency>
To add support for:
wbmp
bmp
pcx
pnm
raw
tiff
gif (write)
You can check the list of supported formats with:
for (String format : ImageIO.getReaderFormatNames())
    System.out.println(format);
Note that you only have to drop the jar (jai-imageio-core-1.3.1.jar for example) into your classpath to make it work.
Other projects that add additional support for image types include:
https://github.com/haraldk/TwelveMonkeys
https://github.com/geosolutions-it/imageio-ext
The approach of using ImageIO.read directly is not right in some cases. In my case, the raw byte[] doesn't contain any information about the width, height, or format of the image, so using only ImageIO.read, it is impossible for the program to construct a valid image.
It is necessary to pass the basic information of the image to the BufferedImage object:
BufferedImage outBufImg = new BufferedImage(width, height, BufferedImage.TYPE_3BYTE_BGR);
Then set the data for the BufferedImage object by using setRGB or setData. (When using setRGB, it seems we must convert the byte[] to an int[] first. As a result, it may cause performance issues if the source image data is big; setData may be a better idea for big byte[] source data.)
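For completeness, a sketch of the setRGB path described above (assuming 3-byte BGR source data; srcbuf, width and height are illustrative names):
// Pack each BGR triple into an ARGB int, then hand the whole array to setRGB.
int[] pixels = new int[width * height];
for (int i = 0; i < pixels.length; i++) {
    int b = srcbuf[3 * i] & 0xFF;
    int g = srcbuf[3 * i + 1] & 0xFF;
    int r = srcbuf[3 * i + 2] & 0xFF;
    pixels[i] = 0xFF000000 | (r << 16) | (g << 8) | b;
}
outBufImg.setRGB(0, 0, width, height, pixels, 0, width);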

WPF image vector format export (XPS?)

Our tool allows export to PNG, which works very nicely.
Now, I would like to add export to some vector format. I tried XPS, but the results are not satisfying at all.
Take a look at a comparison http://www.jakubmaly.cz/xps-vs-png.png.
The picture on the left comes from the XPS export, the picture on the right from the PNG export; the XPS picture is visibly blurred when opened in XPS Viewer and zoomed to 100%.
Are there any settings that I am missing or why is it so?
Thanks,
Jakub.
A sample xps output can be found here: http://www.jakubmaly.cz/files/a.xps.
This is the code that does the XPS export:
if (!boundingRectangle.HasValue)
{
    boundingRectangle = new Rect(0, 0, frameworkElement.ActualWidth, frameworkElement.ActualHeight);
}

// Save current canvas transform
Transform transform = frameworkElement.LayoutTransform;
// Temporarily reset the layout transform before saving
frameworkElement.LayoutTransform = null;

// Get the size of the canvas
Size size = new Size(boundingRectangle.Value.Width, boundingRectangle.Value.Height);
// Measure and arrange elements
frameworkElement.Measure(size);
frameworkElement.Arrange(new Rect(size));

// Open new package
System.IO.Packaging.Package package = System.IO.Packaging.Package.Open(filename, FileMode.Create);
// Create new XPS document based on the package opened
XpsDocument doc = new XpsDocument(package);
// Create an instance of XpsDocumentWriter for the document
XpsDocumentWriter writer = XpsDocument.CreateXpsDocumentWriter(doc);
// Write the canvas (as Visual) to the document
writer.Write(frameworkElement);
// Close document
doc.Close();
// Close package
package.Close();

// Restore previously saved layout transform
frameworkElement.LayoutTransform = transform;
Interesting (and annoying) issue - you may want to check out the lengthy answer from Jo0815 to Printing XpsDocument causes resampled images (96dpi?) - FixedDocument prints sharp, quoting a Microsoft support response - a couple of excerpts:
Some vector features from WPF cannot be emulated in our GDI code and we resort to converting subsets of the scene to GDI bitmaps. These bitmaps are the cause of the blurred zooming.
[...]
These bitmaps are the cause of the blurred zooming. The problem is that the WPF is being rasterised to a bitmap at the wrong resolution. The print path is designed to rasterise unsupported features into a bitmap, but it is supposed to do it at device resolution. Instead the rasterisation is always being done at 96dpi. That's fine for a screen but produces blurred output for a 600dpi printer. [emphasis mine]
Please note that the latter applies to today's higher-DPI screens as well, of course; I've encountered blurring like this various times already. Do you by chance use a high-DPI monitor?
Now, apparently Microsoft is not entirely in control of the apparatus regarding this:
Additionally the problem only occurs when printing XPS and isn't a problem when printing XAML directly. I'm pretty sure there is documentation somewhere that says XPS will print at device resolution. [...] It is something we plan to improve in the next version of the product but not for Win 7. The problem is that when printing XAML it will correctly render the image at 600dpi, but when printing XPS it will still render the image at 96dpi. Since XAML is converted to XPS before printing it seems highly odd that one method of printing XPS produces different results to another method of printing XPS. [emphasis mine]
[...]
There is no UI to configure the XPS Document Writer DPI. If you later print a generated XPS document at a different DPI from the writer's internal default you may get poor results for bitmap content. With GDI printers you can control the final DPI and your final destination is usually paper - no chance to reprint the document.
Conclusion
In conclusion, I'd still try to adjust the PrintTicket.PageResolution property within Néstor Sánchez's approach (+1), roughly as sketched after the quoted excerpt below, if your use case allows this (though I remotely recall reading somewhere that this doesn't have any effect either); the section Bitmap Resolution and Pixel Format in Using the XPS Rasterization Service confirms the issue he encountered with FixedDocument:
XPS rasterizer object for a fixed page must know the resolution at which the page will be rendered. The XPSDrv filter specifies this resolution, in dots per inch (DPI), as an input parameter [...] For example, if a display device has a resolution of 600 DPI, and a fixed page describes a standard letter-size page, a bitmap image of the entire page has the following dimensions [...]
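The adjustment I have in mind would look roughly like this sketch (reusing the writer from the export code in the question; 600 dpi is an assumption, and I cannot confirm it actually overrides the 96 dpi rasterisation):
// sketch: ask for 600 dpi on the ticket used for writing (may be ignored by the XPS path)
var ticket = new PrintTicket();
ticket.PageResolution = new PageResolution(600, 600);
writer.Write(frameworkElement, ticket);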
Workaround
As a potential workaround you might want to explore alexandrud's solution for the related question How to convert a XPS file to an image in high quality (rather than blurry low resolution)?, which recommends using xps2img, an XPS (XML Paper Specification) document to set-of-images conversion utility. In particular, it allows specifying the image size or DPI, which might help depending on the print path solution applied in turn.
Good luck!
I've had a similar problem. My image was very blurry when passed to XPS through an intermediate FixedDocument.
The solution was to write the image directly to the XPS...
/// <summary>
/// Saves the supplied visual Source, within the specified Bounds, as XPS in the specified File-Name.
/// Returns error message or null when succeeded.
/// </summary>
public static string SaveVisualAsXPS(Visual Source, Size Bounds, string FileName)
{
    string ErrorMessage = null;
    try
    {
        using (var Container = Package.Open(FileName, FileMode.Create))
        {
            using (var TargetDocument = new XpsDocument(Container, CompressionOption.Maximum))
            {
                var Writer = XpsDocument.CreateXpsDocumentWriter(TargetDocument);
                var Ticket = GetPrintTicketFromPrinter();
                if (Ticket == null)
                    return "No printer is defined.";

                Ticket.PageMediaSize = new PageMediaSize(Bounds.Width, Bounds.Height);
                var SourceVisual = Source;
                Writer.Write(SourceVisual, Ticket);
            }
        }
    }
    catch (Exception Problem)
    {
        ErrorMessage = "Cannot export document to XPS.\nProblem: " + Problem.Message;
    }

    return ErrorMessage;
}
Giving a print ticket with the exact width and height avoids scaling (which is what I wanted in my case).
Get the function from the example in:
http://msdn.microsoft.com/en-us/library/system.printing.printticket.aspx
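If you only need the default queue's ticket, a trimmed-down sketch of that helper could look like this (error handling omitted; the linked MSDN example is the authoritative version):
private static PrintTicket GetPrintTicketFromPrinter()
{
    // Returns the default ticket of the default local print queue.
    using (var printServer = new LocalPrintServer())
    {
        PrintQueue queue = printServer.GetDefaultPrintQueue();
        return queue != null ? queue.DefaultPrintTicket : null;
    }
}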

Resources