ML.NET prediction differs hugely from Custom Vision - WPF

I've trained an object detection model using Azure Custom Vision, exported the model as ONNX, and then imported it into my WPF (.NET Core) project.
I use ML.NET to get predictions from the model, and I found the results differ hugely from the predictions I see in Custom Vision.
I've tried different orders of extraction (ABGR, ARGB, etc.), but the results are still very disappointing. Can anyone give me some advice? There isn't much documentation online about using a Custom Vision ONNX model with WPF for object detection.
Here's a snippet:
// Model creation and pipeline definition for images needs to run just once, so calling it from the constructor:
var pipeline = mlContext.Transforms
    .ResizeImages(
        resizing: ImageResizingEstimator.ResizingKind.Fill,
        outputColumnName: MLObjectDetectionSettings.InputTensorName,
        imageWidth: MLObjectDetectionSettings.ImageWidth,
        imageHeight: MLObjectDetectionSettings.ImageHeight,
        inputColumnName: nameof(MLObjectDetectionInputData.Image))
    .Append(mlContext.Transforms.ExtractPixels(
        colorsToExtract: ImagePixelExtractingEstimator.ColorBits.Rgb,
        orderOfExtraction: ImagePixelExtractingEstimator.ColorsOrder.ABGR,
        outputColumnName: MLObjectDetectionSettings.InputTensorName))
    .Append(mlContext.Transforms.ApplyOnnxModel(
        modelFile: modelPath,
        outputColumnName: MLObjectDetectionSettings.OutputTensorName,
        inputColumnName: MLObjectDetectionSettings.InputTensorName));

//Create empty DataView. We just need the schema to call Fit()
var emptyData = new List<MLObjectDetectionInputData>();
var dataView = mlContext.Data.LoadFromEnumerable(emptyData);

//Generate a model.
var model = pipeline.Fit(dataView);
Then I use the model to create the prediction engine:
//Create prediction engine.
var predictionEngine = _mlObjectDetectionContext.Model.CreatePredictionEngine<MLObjectDetectionInputData, MLObjectDetectionPrediction>(_mlObjectDetectionModel);
//Load tag labels.
var labels = File.ReadAllLines(LABELS_OBJECT_DETECTION_FILE_PATH);
//Create input data.
var imageInput = new MLObjectDetectionInputData { Image = this.originalImage };
//Predict.
var prediction = predictionEngine.Predict(imageInput);

Can you check that the image input (imageInput) is resized to the size the model expects when you prepare the pipeline, i.e. that both resize parameters match the model requirements:
imageWidth: MLObjectDetectionSettings.ImageWidth,
imageHeight: MLObjectDetectionSettings.ImageHeight.
Also, the ExtractPixels parameters, especially ColorBits and ColorsOrder, should follow the model requirements.
Hope this helps,
Arif

Maybe it's because the aspect ratio is not preserved during the resize.
Try with an image whose size is exactly
MLObjectDetectionSettings.ImageWidth x MLObjectDetectionSettings.ImageHeight
and you should see much better results.
I think Azure does some preliminary processing on the image, maybe padding (also during training?) or cropping.
It may also slide a moving window (of the size the model expects) over the image during processing and then aggregate the results.

Related

How to update UI with a viewModel for byte arrays and lists in Kotlin/Jetpack Compose?

So the problem I am facing is that when the viewModel data updates, it doesn't seem to update the state of my byte arrays (ByteArray by mutableStateOf) and lists (mutableListOf()).
When I change pages and come back to the page, it does update them again. How can I get the view to update for things like lists and byte arrays?
Is mutableStateOf the wrong way to update byte arrays and lists? I couldn't really find anything useful.
Example of a byte array that doesn't work (the same code with a Float by mutableStateOf works!).
How I retrieve the data in the @Composable:
val Data = BluetoothMonitoring.shared.Data /*from a viewModel class, doesn't update when Data / bytearray /list changes, only when switching pages in app.*/
Class:
class BluetoothMonitoring : ViewModel() {
    companion object {
        val shared = BluetoothMonitoring()
    }

    var Data: ByteArray by mutableStateOf(ByteArray(11) { 0x00 })
}
Any help is appreciated and thanks in advance!
You seem to come from iOS/Swift, where using shared objects is a common pattern.
In Android, you're not supposed to create a ViewModel instance by hand; you are supposed to use a ViewModelFactory or Compose's viewModel() function. The ViewModel basically preserves some data for you across activity recreation, i.e. when the activity is paused, resumed, etc.
The generally good way of structuring your app is UDF (Unidirectional Data Flow), where data flows up towards the view/composable and events flow down towards the viewmodel, data layer, etc. We use something called a Flow, which is a reactive data stream that updates the listener whenever a value changes.
Keeping these two points in mind, I've created a very brief sketch of how you could restructure your code so that it almost always works. Please adapt it to your logic.
class MyViewModel : ViewModel() {
    // Declare a flow in the ViewModel
    val myData = MutableStateFlow(0)

    fun modifyMyData(newData: Int) {
        // Update the flow's value; collectors are notified automatically
        myData.value = newData
    }
}
And in your composable view layer
@Composable
fun YourComposable() {
    val myViewModel: MyViewModel = viewModel()
    val myUiState by myViewModel.myData.collectAsState()
    // Now use your value; when the ViewModel changes it, this composable recomposes with the new state
}
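To illustrate the "events flow down" half of UDF as well, here is a minimal sketch of the event side. The screen name and the increment button are made up for illustration; it assumes the MyViewModel class shown above and the usual Compose dependencies:

import androidx.compose.foundation.layout.Column
import androidx.compose.material.Button
import androidx.compose.material.Text
import androidx.compose.runtime.Composable
import androidx.compose.runtime.collectAsState
import androidx.compose.runtime.getValue
import androidx.lifecycle.viewmodel.compose.viewModel

@Composable
fun CounterScreen() {
    val myViewModel: MyViewModel = viewModel()
    // State flows down: this composable recomposes whenever myData emits a new value
    val myUiState by myViewModel.myData.collectAsState()

    Column {
        Text("Current value: $myUiState")
        // Event flows up: the click is forwarded to the ViewModel, which updates the flow
        Button(onClick = { myViewModel.modifyMyData(myUiState + 1) }) {
            Text("Increment")
        }
    }
}

With this shape, the ByteArray from the question could be exposed the same way, e.g. as a MutableStateFlow<ByteArray>, replacing its value with a new array on each update so that collectors are notified.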
I also recommend reading this codelab
How I handle byte arrays over Bluetooth with Kotlin and Android.
I'm talking (communicating) to an Arduino over Bluetooth in the app I'm making. Kotlin likes Byte and Arduino likes UBYTE, so I do all of those translations in the app, because in Kotlin it's easy inline code and the phone has more power for such things.
Add toByte() to everything that is outbound. <- (actual solution)
outS[8] = Color.alpha(word.color).toByte()
outS[9] = Color.red(word.color).toByte()
outS[10] = Color.green(word.color).toByte()
outS[11] = Color.blue(word.color).toByte()
outS[12] = 0.toByte()
outS[13] = 232.toByte()
outS[14] = 34.toByte()
outS[15] = 182.toByte()
//outS[16] = newline working without a newLine!
// outS[16] = newline
val os = getMyOutputStream()
if (os != null) {
    os.write(outS)
}
For inbound data...I have to change everything to UByte. <- (actual solution)
val w: Word = wordViewModel.getWord(i)
if (w._id != 0) {
    w.rechecked = (byteArray[7].toInt() != 0)
    w.recolor = Color.argb(
        byteArray[8].toUByte().toInt(),
        byteArray[9].toUByte().toInt(),
        byteArray[10].toUByte().toInt(),
        byteArray[11].toUByte().toInt()
    )
    //finally update the word
    wordViewModel.update(w)
}
My YouTube channel is about model trains, not coding, though there is some Android and electronics in there, and a video of Bluetooth Arduino Android is coming soon, like tonight or tomorrow. I've been working on the app for just over a month, using it to learn Kotlin. The project is actually working, but I have to make the Bluetooth connection manually; the app controls individual NeoPixels for my model train layout.

Save Images in Room Database

I want to save small images in my Room database and I have two issues:
How do I save an image in my database?
How do I save multiple images in my database?
I tried saving a bitmap in the way recommended by the developers page (https://developer.android.com/training/data-storage/room/defining-data)
@Parcelize
@Entity(tableName = "image_table")
data class ImgMod(
    @PrimaryKey(autoGenerate = true)
    var invoiceId: Long = 0L,
    @ColumnInfo(name = "image")
    var single_img: Bitmap?
) : Parcelable
However, I receive the following error:
Cannot figure out how to save this field into database. You can consider adding a type converter for it.
Secondly, I would like to save multiple images in one database entry. But I receive the same error with the following snippets:
@ColumnInfo(name = "imageList")
var img_list: ArrayList<Bitmap>
or decoded bitmap as a String
@ColumnInfo(name = "imageList")
var decoded_img_list: ArrayList<String>
I am sorry if this is a very basic question, but how do I have to configure the database and process the data to get a list of images?
Thank you in advance,
rot8
A very simple way to get images into the database (although I personally discourage it) would be to base64-encode the bitmaps into a String and put that into a row of the database.
Take into account that bitmaps are very memory heavy, and base64-encoding something increases its size some more, so be careful when loading a bunch of images... I also think Room and SQLite support binary data as BLOBs, so you could just declare a column as ByteArray and it should just work.
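If you go the ByteArray/BLOB route, a minimal sketch of a Room TypeConverter might look like the following (the class name and the choice of PNG are illustrative, not from the original question):

import android.graphics.Bitmap
import android.graphics.BitmapFactory
import androidx.room.Database
import androidx.room.RoomDatabase
import androidx.room.TypeConverter
import androidx.room.TypeConverters
import java.io.ByteArrayOutputStream

class ImageConverters {
    @TypeConverter
    fun fromBitmap(bitmap: Bitmap?): ByteArray? {
        if (bitmap == null) return null
        val stream = ByteArrayOutputStream()
        // PNG is lossless; the quality argument is ignored for PNG
        bitmap.compress(Bitmap.CompressFormat.PNG, 100, stream)
        return stream.toByteArray()
    }

    @TypeConverter
    fun toBitmap(bytes: ByteArray?): Bitmap? =
        bytes?.let { BitmapFactory.decodeByteArray(it, 0, it.size) }
}

// Register the converters on the database class (AppDatabase is a placeholder name)
@Database(entities = [ImgMod::class], version = 1)
@TypeConverters(ImageConverters::class)
abstract class AppDatabase : RoomDatabase() {
    // DAO accessors go here, e.g. abstract fun imgModDao(): ImgModDao
}

With the converter registered, the Bitmap column in ImgMod should persist; just keep the images small, as noted above, since large blobs make queries heavy.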
What I've been doing in my projects is to write the images into the internal or external storage of my app and then store the reference Uri as a String, so I can later retrieve the image from disk.
Something that I discourage even more is stuffing more than one value per row; having a list of stuff inside a column in SQL is definitely not a pattern you should follow. Creating a "join" table should be simple enough, or simply an extra column you could use to group the images by, right?
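A rough sketch of that one-image-per-row idea, with an extra column used to group images under their parent invoice (all names here are made up for illustration):

import androidx.room.ColumnInfo
import androidx.room.Dao
import androidx.room.Entity
import androidx.room.PrimaryKey
import androidx.room.Query

@Entity(tableName = "invoice_image_table")
data class InvoiceImage(
    @PrimaryKey(autoGenerate = true)
    var imageId: Long = 0L,
    // Groups several image rows under one invoice instead of one row holding a list
    @ColumnInfo(name = "owner_invoice_id")
    var ownerInvoiceId: Long = 0L,
    // Either a serialized image (ByteArray/BLOB) or, preferably, a file Uri stored as text
    @ColumnInfo(name = "image_uri")
    var imageUri: String = ""
)

@Dao
interface InvoiceImageDao {
    @Query("SELECT * FROM invoice_image_table WHERE owner_invoice_id = :invoiceId")
    fun imagesForInvoice(invoiceId: Long): List<InvoiceImage>
}

Querying by ownerInvoiceId then gives you all the images for a given invoice without ever storing a list inside a single column.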

Displaying Parse Data to ContainerList

I want to display data from Parse, from the GameScore class, in a list using a Container in Codename One. This is what I've tried so far, and it's not showing anything nor giving any errors:
Container container = findListCont();
container.setLayout(BoxLayout.y());
container.setScrollableY(true);
ParseQuery<ParseObject> query = ParseQuery.getQuery("GameScore");
List<ParseObject> results = (List<ParseObject>) query.find();
System.out.println("Size: " + results.size());
container.addComponent(results, f);
Please help me out, I'm new to Codename One. If there are tutorials on this, please share them, or anything else that helps me achieve the desired result.
I'm actually shocked this isn't failing. You are using the add-with-constraint variant, placing the results object as the constraint, and adding the form object into the container...
You need to loop over the results and convert them to components to add into the layout. It also seems that you are using the old GUI builder, which I would recommend against.
Generally, something like this rough pseudocode should work, assuming you are using a box Y layout:
for (ParseObject o : results) {
    // getDisplayValue() is pseudocode - use whichever field you want to show, e.g. o.getString("playerName")
    MultiButton mb = new MultiButton(o.getDisplayValue());
    f.add(mb);
}
f.revalidate();

Synchronously play videos from arrays using actionscript 3

I am developing an AIR application using ActionScript 3.0. The application loads two native windows on two separate monitors. Monitor 1 will play a video, and monitor 2 will synchronously play an overlaid version of the same video (i.e. infrared). Presently, I am hung up on figuring out the best way to load the videos. I'm thinking about using arrays, but am wondering, if I do, how I can link the arrays so that a primary scene is tied to its secondary overlay videos.
Here is my code so far. The first part of it creates the 2 native full-screen windows, and then you'll see where I started coding the arrays at the bottom:
package
{
    import flash.display.NativeWindow;
    import flash.display.NativeWindowInitOptions;
    import flash.display.NativeWindowSystemChrome;
    import flash.display.Screen;
    import flash.display.Sprite;
    import flash.display.StageAlign;
    import flash.display.StageDisplayState;
    import flash.display.StageScaleMode;

    public class InTheAirNet_MultiviewPlayer extends Sprite {
        public var secondWindow:NativeWindow;

        public function InTheAirNet_MultiviewPlayer() {
            // Output screen sizes and positions (for debugging)
            for each (var s:Screen in Screen.screens) trace(s.bounds);

            // Make primary (default) window's stage go fullscreen
            stage.align = StageAlign.TOP_LEFT;
            stage.scaleMode = StageScaleMode.NO_SCALE;
            stage.displayState = StageDisplayState.FULL_SCREEN_INTERACTIVE;
            stage.color = 0xC02A2A; // red

            // Create fullscreen window on second monitor (check if available first)
            if (Screen.screens[1]) {
                // Second window
                var nwio:NativeWindowInitOptions = new NativeWindowInitOptions();
                nwio.systemChrome = NativeWindowSystemChrome.NONE;
                secondWindow = new NativeWindow(nwio);
                secondWindow.bounds = (Screen.screens[1] as Screen).bounds;
                secondWindow.activate();

                // Second window's stage
                secondWindow.stage.align = StageAlign.TOP_LEFT;
                secondWindow.stage.scaleMode = StageScaleMode.NO_SCALE;
                secondWindow.stage.displayState = StageDisplayState.FULL_SCREEN_INTERACTIVE;
                secondWindow.stage.color = 0x387D19; // green
            }

            //Create array of PRIMARY scenes
            var primary:Array = ["scene1.f4v", "scene2.f4v"];

            //Create array of SECONDARY scenes for scene1
            var secondary1:Array = ["scene1A.f4v", "scene1B.f4v"];

            //Create array of SECONDARY scenes for scene2
            var secondary2:Array = ["scene2A.f4v", "scene2B.f4v"];
        }
    }
}
EDIT: Users will cycle through overlays using LEFT and RIGHT on the keyboard, and will cycle through the scenes using UP and DOWN.
Use a generic Object for each video, store each in an array, and then store the secondary videos in an array within each object:
var videos:Array = [
    {
        primary:'video1.mp4',
        secondary:[
            'overlay101.mp4',
            'overlay102.mp4'
        ]
    },
    {
        primary:'video2.mp4',
        secondary:[
            'overlay201.mp4',
            'overlay202.mp4'
        ]
    },
    {
        primary:'video3.mp4',
        secondary:[
            'overlay301.mp4',
            'overlay302.mp4'
        ]
    }
];
Then when you want to play a video, you can loop through the videos array. Each object has a primary video and its associated secondary videos. You could take it a step further as well, and use the object to store metadata, or store more Objects within the secondary array rather than strings.
EDIT: Quick response to the first comment
In Object-Oriented Programming (OOP) languages, everything is an object. Every single class, every single function, every single loop, every single variable is an object. Classes generally extend a base class, which is called Object in AS3.
The Object is the most basic, most primitive type of object available — it is simply a list of name:value pairs, sometimes referred to as a dictionary. So you can do the following:
var obj:Object = {hello:'world'};
trace(obj.hello); // output 'world'
trace(obj['hello']); // output 'world'
trace(obj.hasOwnProperty('hello')); //output true (obj has a property named 'hello')
What I did was create one of these objects for each video and then save them to an array. So say you wanted to trace out primary and find out how many videos were in secondary, you would do this:
for (var i:int = 0; i < videos.length; i++) {
    var obj:Object = videos[i];
    trace('Primary: ' + obj.primary); // for video1, output 'video1.mp4'
    trace('Secondary Length: ' + obj.secondary.length); // for video1, output 2
}
That should show you how to access the values. A dictionary/Object is the correct way to associate data, and the structure I supplied is very simple but exactly what you need.
What we're talking about are the basic fundamentals of OOP (which is what AS3 is) and dot syntax, used by many languages (everything from AS3 to JS to Java to C++ to Python) as a way of accessing objects stored within other objects. You really need to read up on the basics of OOP before moving forward; you seem to be missing the fundamentals, which are key to writing any application.
I can't perfectly understand what you're trying to do, but have you thought about trying a for each loop? It would look something like...
prim1();

function prim1():void
{
    for each (var vid:Video in primary)
    {
        //code to play the primary video here
    }
    for each (var vid:Video in secondary1)
    {
        //code to play the secondary video here
    }
}
Sorry if it's not what you wanted, but what I understood from the question is that you're trying to play both the primary and secondary videos at the same time so that they're in sync. Good luck with your program ^^

How to find the closest point in the DB using objectify-appengine

I'm using objectify-appengine in my app. In the DB I store the latitude & longitude of places.
At some point I'd like to find the closest place (from the DB) to a specific point.
As far as I understood, I can't perform regular SQL-like queries.
So my question is: how can this be done in the best way?
You should take a look at GeoModel, which enables geospatial queries on Google App Engine.
Update:
Let's assume that you have, in your Objectify-annotated model class, a GeoPt property called coordinates.
You need to have two libraries in your project:
GeoLocation.java
Java GeoModel
In the code where you want to perform a geo query, you do the following:
import com.beoui.geocell.GeocellManager;
import com.beoui.geocell.model.BoundingBox;
import you.package.path.GeoLocation;
// other imports

// in your method
GeoLocation specificPointLocation = GeoLocation.fromDegrees(specificPoint.latitude, specificPoint.longitude);
GeoLocation[] bc = specificPointLocation.boundingCoordinates(radius);

// Transform this to a bounding box
BoundingBox bb = new BoundingBox((float) bc[0].getLatitudeInDegrees(),
        (float) bc[1].getLongitudeInDegrees(),
        (float) bc[1].getLatitudeInDegrees(),
        (float) bc[0].getLongitudeInDegrees());

// Calculate the geocells list to be used in the queries (optimize
// list of cells that complete the given bounding box)
List<String> cells = GeocellManager.bestBboxSearchCells(bb, null);

// Calculate the geocells of your model class instance
List<String> modelCells = GeocellManager.generateGeoCell(myInstance.getCoordinates());

// Matching
for (String c : cells) {
    if (modelCells.contains(c)) {
        // success, do something with it
        break;
    }
}
