Context: Silverlight/WPF, C#, .NET 4.
I have a 4-level-deep tree of thumbnails that I need to enumerate and display in some meaningful way.
For synchronous (sequential) execution, the logic would look like this:
Channels = Channels_Build("CHANNELS.XML");
foreach Ch in Channels
{
    Cats = Cats_Build( Ch.URL );
    foreach Cat in Cats
    {
        PLs = PLs_Build( Cat.URL );
        foreach PL in PLs
        {
            Medias = Medias_Build( PL.URL );
            foreach Media in Medias
                display Media image
        }
    }
}
However, I have an async loading model for XML, images, etc., so I am thinking of something like this:
Channels_Build("CHANNELS.XML");
Channels_Loaded()
{ // Channels build from some returned XML
foreach Ch in Channels
Cats_Build( Ch.URL, ??? ) ; //async calls
}
Cats_Loaded()
{ // Cats build from some returned XML
foreach Cat in Cats
PLs_Build( Cat.URL ... ) ;
}
PLs_Loaded()
{ // PLs build from some returned XML
foreach PL in PLs
MediaList_Build( PL.URL ... ) ;
}
MediaList_Loaded()
{ // MediaList build from some returned XML
foreach media in MediaList
display Media image
}
Each of Channels_Build, Cats_Build, PLs_Build and MediaList_Build makes an async call and thus has an associated xxx_Loaded() callback.
Each Channel has 1 or more Categories.
Each Category has 1 or more PlayLists.
Each PlayList has 1 or more Media.
Thus, I have a 4-level-deep hierarchical structure.
You can assume Channels, Cats, PLs and MediaList share a common base class.
Should I fold this 4x logic into a single recursive build method? How? I would have to make the build process pass some info (parent node) to its corresponding callback (I looked up IAsyncResult.AsyncState)
My brain is locked up and I can't figure out what's needed here: recursion? passing info to async calls? a specific pattern? ...
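Roughly the single recursive build I have in mind (all names here are hypothetical), where each level passes its parent node to the callback by capturing it in a lambda instead of going through IAsyncResult.AsyncState:

// Sketch only: Node, ParseChildren and DisplayMediaImage are made-up names.
void BuildChildren(Node parent, Uri url)
{
    var wc = new WebClient();
    wc.DownloadStringCompleted += (s, e) =>
    {
        foreach (Node child in ParseChildren(e.Result))   // build from the returned XML
        {
            parent.Children.Add(child);
            if (child.IsLeaf)
                DisplayMediaImage(child);                 // Media level: show the image
            else
                BuildChildren(child, child.Url);          // recurse into the next level
        }
    };
    wc.DownloadStringAsync(url);                          // async call (Silverlight WebClient)
}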
Should I build the tree data in memory, into one structure, first? Or would that be useful only if I decide to use a TreeView control? What if I decide to display the info using a repeating template in a ListBox, for example? The template would display:
Channel-Image+Name
Category-Image+Name
PlayList-Image+Name
MediaList images...
Yes, the non-leaf nodes would repeat visually down the list. That's fine as it might provide the map I am looking for.
So the puzzling question remains:
How do I go about enumerating and displaying all the nodes in this Async model?
Thank you.
I would look at the Async CTP refresh, which simplifies this kind of code a lot.
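For illustration, here is a minimal sketch of what that buys you, assuming each Build step is wrapped in a hypothetical Task-returning method (Channels_BuildAsync and friends are not your actual API): with async/await the four callbacks collapse back into the nested-loop shape of your synchronous version.

// Sketch only: the *_BuildAsync wrappers and DisplayMediaImage are assumed names.
async Task DisplayAllAsync()
{
    var channels = await Channels_BuildAsync("CHANNELS.XML");
    foreach (var ch in channels)
    {
        var cats = await Cats_BuildAsync(ch.URL);
        foreach (var cat in cats)
        {
            var pls = await PLs_BuildAsync(cat.URL);
            foreach (var pl in pls)
            {
                var medias = await Medias_BuildAsync(pl.URL);
                foreach (var media in medias)
                    DisplayMediaImage(media);
            }
        }
    }
}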
The problem I am facing is that when the ViewModel data updates, it doesn't seem to update the state of my byte arrays: ByteArray by mutableStateOf(...) and mutableListOf().
When I change pages and come back to the page, it does update them again. How can I get the view to update for things like lists and byte arrays?
Is mutableStateOf the wrong way to update byte arrays and lists? I couldn't really find anything useful.
Example of byte array that doesn't work (using this code with a Float by mutableStateOf works!).
How I retrieve the data in the @Composable:
val Data = BluetoothMonitoring.shared.Data /*from a viewModel class, doesn't update when Data / bytearray /list changes, only when switching pages in app.*/
Class:
class BluetoothMonitoring : ViewModel() {
    companion object {
        val shared = BluetoothMonitoring()
    }

    var Data: ByteArray by mutableStateOf(ByteArray(11) { 0x00 })
}
Any help is appreciated and thanks in advance!
You seem to come from iOS/Swift, where using shared objects is a common pattern.
In Android, you're not supposed to create a ViewModel instance by hand; you are supposed to use a ViewModelFactory or Compose's viewModel() function. The ViewModel basically preserves some data for you across activity recreation, i.e. when the activity is paused, resumed, etc.
The generally good way of structuring your app is UDF (Unidirectional Data Flow), where data flows upwards towards the view/composable and events flow downwards towards the ViewModel, data layer, etc. We use something called a Flow, which is a reactive data stream that updates the listener whenever some value changes.
Keeping these two points in mind, I've sketched a very brief way you could restructure your code so that it almost always works. Please adapt it to your own logic.
class MyViewModel : ViewModel() {
    // Declare a flow in the ViewModel
    val myData = MutableStateFlow(0)

    fun modifyMyData(newData: Int) {
        myData.value = newData
    }
}
And in your composable view layer
@Composable
fun YourComposable() {
    val myViewModel: MyViewModel = viewModel()
    val myUiState by myViewModel.myData.collectAsState()
    // Now use your value, and change it, it will be changed accordingly and updated everywhere
}
I also recommend reading this codelab
How I handle byte arrays over Bluetooth with Kotlin and Android.
I'm talking (communicating) to an Arduino over Bluetooth in the app I'm making. Kotlin likes Byte and Arduino likes UBYTE, so I do all of those translations in the app, because Kotlin threads make that easy and the phone has more power for such things.
Add toByte() to everything that is outbound. <- (actual solution)
outS[8] = Color.alpha(word.color).toByte()
outS[9] = Color.red(word.color).toByte()
outS[10] = Color.green(word.color).toByte()
outS[11] = Color.blue(word.color).toByte()
outS[12] = 0.toByte()
outS[13] = 232.toByte()
outS[14] = 34.toByte()
outS[15] = 182.toByte()
//outS[16] = newline working without a newLine!
// outS[16] = newline
val os = getMyOutputStream()
if (os != null) {
    os.write(outS)
}
For inbound data...I have to change everything to UByte. <- (actual solution)
val w: Word = wordViewModel.getWord(i)
if (w._id != 0) {
    w.rechecked = (byteArray[7].toInt() != 0)
    w.recolor = Color.argb(
        byteArray[8].toUByte().toInt(),
        byteArray[9].toUByte().toInt(),
        byteArray[10].toUByte().toInt(),
        byteArray[11].toUByte().toInt()
    )
    // finally update the word
    wordViewModel.update(w)
}
My YouTube channel is about model trains, not coding, though there's some Android and electronics in there, and a video of Bluetooth/Arduino/Android is coming soon, like tonight or tomorrow. I've been working on the app for just over a month, using it to learn Kotlin. The project is actually working, but I have to make the Bluetooth connection manually; the app controls individual NeoPixels for my model train layout.
I've trained a model (object detection) using Azure Custom Vision and exported the model as ONNX,
then imported the model into my WPF (.NET Core) project.
I use ML.NET to get predictions from my model, and I found the result is HUGELY different compared with the prediction I saw on Custom Vision.
I've tried different orders of extraction (ABGR, ARGB, etc.), but the results are very disappointing. Can anyone give me some advice, as there is not much documentation online about using Custom Vision's ONNX model with WPF to do object detection?
Here's some snippet:
// Model creation and pipeline definition for images needs to run just once, so calling it from the constructor:
var pipeline = mlContext.Transforms
    .ResizeImages(
        resizing: ImageResizingEstimator.ResizingKind.Fill,
        outputColumnName: MLObjectDetectionSettings.InputTensorName,
        imageWidth: MLObjectDetectionSettings.ImageWidth,
        imageHeight: MLObjectDetectionSettings.ImageHeight,
        inputColumnName: nameof(MLObjectDetectionInputData.Image))
    .Append(mlContext.Transforms.ExtractPixels(
        colorsToExtract: ImagePixelExtractingEstimator.ColorBits.Rgb,
        orderOfExtraction: ImagePixelExtractingEstimator.ColorsOrder.ABGR,
        outputColumnName: MLObjectDetectionSettings.InputTensorName))
    .Append(mlContext.Transforms.ApplyOnnxModel(
        modelFile: modelPath,
        outputColumnName: MLObjectDetectionSettings.OutputTensorName,
        inputColumnName: MLObjectDetectionSettings.InputTensorName));

// Create empty DataView. We just need the schema to call Fit().
var emptyData = new List<MLObjectDetectionInputData>();
var dataView = mlContext.Data.LoadFromEnumerable(emptyData);

// Generate a model.
var model = pipeline.Fit(dataView);
Then I use the model to create the prediction engine and predict:
//Create prediction engine.
var predictionEngine = _mlObjectDetectionContext.Model.CreatePredictionEngine<MLObjectDetectionInputData, MLObjectDetectionPrediction>(_mlObjectDetectionModel);
//Load tag labels.
var labels = File.ReadAllLines(LABELS_OBJECT_DETECTION_FILE_PATH);
//Create input data.
var imageInput = new MLObjectDetectionInputData { Image = this.originalImage };
//Predict.
var prediction = predictionEngine.Predict(imageInput);
Can you check that the image input (imageInput) is resized to the same size as the model requires when you prepare the pipeline, for both resize parameters:
imageWidth: MLObjectDetectionSettings.ImageWidth,
imageHeight: MLObjectDetectionSettings.ImageHeight.
Also, the ExtractPixels parameters, especially ColorBits and ColorsOrder, should follow the model requirements.
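For example, if the exported model expects plain RGB input (an assumption on my side; check the model metadata or the sample code Custom Vision exports alongside the ONNX file), the extraction stage would keep ColorBits and ColorsOrder consistent, something like:

// Sketch only: the RGB/ARGB choice below is an assumption; align it with the model.
.Append(mlContext.Transforms.ExtractPixels(
    colorsToExtract: ImagePixelExtractingEstimator.ColorBits.Rgb,
    orderOfExtraction: ImagePixelExtractingEstimator.ColorsOrder.ARGB,
    outputColumnName: MLObjectDetectionSettings.InputTensorName))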
Hope this helps,
Arif
Maybe it's because the aspect ratio is not preserved during the resize.
Try with an image whose size is:
MLObjectDetectionSettings.ImageWidth x MLObjectDetectionSettings.ImageHeight
and you will see much better results.
I think Azure does preliminary processing on the image, maybe padding (also during training?) or cropping.
Maybe during the processing it also uses a moving window (the size the model expects) and then does some aggregation.
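If padding is indeed what Custom Vision does (again, just my assumption), you could try switching the resize stage of your pipeline from Fill to IsoPad, which keeps the aspect ratio and pads the borders instead of stretching:

// Sketch: IsoPad preserves the aspect ratio and pads to the target size; Fill stretches.
.ResizeImages(
    resizing: ImageResizingEstimator.ResizingKind.IsoPad,
    outputColumnName: MLObjectDetectionSettings.InputTensorName,
    imageWidth: MLObjectDetectionSettings.ImageWidth,
    imageHeight: MLObjectDetectionSettings.ImageHeight,
    inputColumnName: nameof(MLObjectDetectionInputData.Image))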
Hi, I am a newbie in Codename One. I am creating an eCommerce app using the Codename One resource editor and have used a MultiList for displaying products. I have enabled the checkbox feature, so I get a checkbox for all my items. My question is: in code, in the state machine, how do I get the name of the product checked by the user, and how do I find out whether the checkbox is checked or not? I tried findMultiList().getSelected(), but it always returns true whether I check or uncheck.
Please help. Also, it would be great if you could tell me how to integrate Google Drive into my project, because I need to pull data from an Excel sheet and populate the list with it.
You need to get the model and traverse the elements within it:
ListModel<Map<String, Object>> model = (ListModel<Map<String, Object>>)findMultiList(c).getModel();
ArrayList<Map<String, Object>> items = new ArrayList<>();
for(int iter = 0 ; iter < model.getSize() ; iter++) {
    Map<String, Object> current = model.getItemAt(iter);
    String checked = (String)current.get("emblem");
    if(checked != null && "true".equals(checked)) {
        items.add(current);
    }
}
I haven't tried this code but it should work. Notice that the name "emblem" is the default name used for the MultiButton/MultiList but you can change it to be anything.
You can place a break point on the for loop and inspect the map elements as you traverse them to see how this works.
Codename One doesn't have dedicated support for the Google Drive API yet...
However, it does support Firebase (noSQL, so no table-type data).
This means you'll have to work with name/value pairs.
There are resources for table databases, though :
https://www.codenameone.com/javadoc/com/codename1/db/Database.html
check out these libraries
https://github.com/shannah/cn1-data-access-lib
(Accessing data from web, sqlite support)
https://github.com/jegesh/cn1-object-cacher
(cache from web db)
These resources should help; good luck with your development :)
I am developing an AIR application using ActionScript 3.0. The application loads two native windows on two separate monitors. Monitor 1 will play a video, and monitor 2 will synchronously play an overlaid version of the same video (i.e. infrared). Presently, I am hung up on figuring out the best way to load the videos. I'm thinking about using arrays, but am wondering, if I do, how I can link the arrays so that a primary scene is linked to its secondary overlay videos.
Here is my code so far. The first part of it creates the 2 native full-screen windows, and then you'll see where I started coding the arrays at the bottom:
package
{
    import flash.display.NativeWindow;
    import flash.display.NativeWindowInitOptions;
    import flash.display.NativeWindowSystemChrome;
    import flash.display.Screen;
    import flash.display.Sprite;
    import flash.display.StageAlign;
    import flash.display.StageDisplayState;
    import flash.display.StageScaleMode;

    public class InTheAirNet_MultiviewPlayer extends Sprite {
        public var secondWindow:NativeWindow;

        public function InTheAirNet_MultiviewPlayer() {
            // Output screen sizes and positions (for debugging)
            for each (var s:Screen in Screen.screens) trace(s.bounds);

            // Make primary (default) window's stage go fullscreen
            stage.align = StageAlign.TOP_LEFT;
            stage.scaleMode = StageScaleMode.NO_SCALE;
            stage.displayState = StageDisplayState.FULL_SCREEN_INTERACTIVE;
            stage.color = 0xC02A2A; // red

            // Create fullscreen window on second monitor (check if available first)
            if (Screen.screens[1]) {
                // Second window
                var nwio:NativeWindowInitOptions = new NativeWindowInitOptions();
                nwio.systemChrome = NativeWindowSystemChrome.NONE;
                secondWindow = new NativeWindow(nwio);
                secondWindow.bounds = (Screen.screens[1] as Screen).bounds;
                secondWindow.activate();

                // Second window's stage
                secondWindow.stage.align = StageAlign.TOP_LEFT;
                secondWindow.stage.scaleMode = StageScaleMode.NO_SCALE;
                secondWindow.stage.displayState = StageDisplayState.FULL_SCREEN_INTERACTIVE;
                secondWindow.stage.color = 0x387D19; // green
            }

            // Create array of PRIMARY scenes
            var primary:Array = ["scene1.f4v", "scene2.f4v"];

            // Create array of SECONDARY scenes for scene1
            var secondary1:Array = ["scene1A.f4v", "scene1B.f4v"];

            // Create array of SECONDARY scenes for scene2
            var secondary2:Array = ["scene2A.f4v", "scene2B.f4v"];
        }
    }
}
EDIT: Users will cycle through overlays using LEFT and RIGHT on the keyboard, and will cycle through the scenes using UP and DOWN.
Use a generic Object for each video, store each in an array, and then store the secondary videos in an array within each object:
var videos:Array = [
    {
        primary: 'video1.mp4',
        secondary: [
            'overlay101.mp4',
            'overlay102.mp4'
        ]
    },
    {
        primary: 'video2.mp4',
        secondary: [
            'overlay201.mp4',
            'overlay202.mp4'
        ]
    },
    {
        primary: 'video3.mp4',
        secondary: [
            'overlay301.mp4',
            'overlay302.mp4'
        ]
    }
];
Then when you want to play a video, you can loop through the videos array. Each object has a primary video and its associated secondary videos. You could take it a step further, as well, and use the object to store metadata, or store more Objects within the secondary array rather than strings.
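Given your EDIT about the arrow keys, here is a rough sketch (function and variable names are placeholders) of how two indices could walk that structure, UP/DOWN for scenes and LEFT/RIGHT for overlays:

import flash.events.KeyboardEvent;
import flash.ui.Keyboard;

var sceneIndex:int = 0;
var overlayIndex:int = 0;

stage.addEventListener(KeyboardEvent.KEY_DOWN, onKeyDown);

function onKeyDown(e:KeyboardEvent):void {
    if (e.keyCode == Keyboard.UP) {
        sceneIndex = (sceneIndex + 1) % videos.length;
        overlayIndex = 0; // new scene, start at its first overlay
    } else if (e.keyCode == Keyboard.DOWN) {
        sceneIndex = (sceneIndex - 1 + videos.length) % videos.length;
        overlayIndex = 0;
    } else if (e.keyCode == Keyboard.RIGHT) {
        overlayIndex = (overlayIndex + 1) % videos[sceneIndex].secondary.length;
    } else if (e.keyCode == Keyboard.LEFT) {
        var count:int = videos[sceneIndex].secondary.length;
        overlayIndex = (overlayIndex - 1 + count) % count;
    }
    // playScene(videos[sceneIndex].primary, videos[sceneIndex].secondary[overlayIndex]); // hypothetical playback call
}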
EDIT: Quick response to the first comment
In Object-Oriented Programming (OOP) languages, everything is an object. Every single class, every single function, every single loop, every single variable is an object. Classes generally extend a base class, which is called Object in AS3.
The Object is the most basic, most primitive type of object available — it is simply a list of name:value pairs, sometimes referred to as a dictionary. So you can do the following:
var obj:Object = {hello:'world'};
trace(obj.hello); // output 'world'
trace(obj['hello']); // output 'world'
trace(obj.hasOwnProperty('hello')); //output true (obj has a property named 'hello')
What I did was create one of these objects for each video and then save them to an array. So say you wanted to trace out primary and find out how many videos were in secondary, you would do this:
for (var i:int = 0; i < videos.length; i++) {
    var obj:Object = videos[i];
    trace('Primary: ' + obj.primary); // for video1, output 'video1.mp4'
    trace('Secondary Length: ' + obj.secondary.length); // for video1, output 2
}
That should show you how to access the values. A dictionary/Object is the correct way to associate data, and the structure I supplied is very simple, but exactly what you need.
What we're talking about is the most basic fundamentals of OOP (which is what AS3 is) and dot syntax, used by many languages (everything from AS3 to JS to Java to C++ to Python) as a way of accessing objects stored within other objects. You really need to read up on the basics of OOP before moving forward; you seem to be missing the fundamentals, which are key to writing any application.
I can't perfectly understand what you're trying to do, but have you thought about trying a for each loop? It would look something like...
prim1();

function prim1():void
{
    for each (var vid:String in primary)
    {
        // code to play the primary video here
    }
    for each (var overlay:String in secondary1)
    {
        // code to play the secondary overlay video here
    }
}
Sorry if it's not what you wanted, but what I understood from the question is that you're trying to play both the primary and secondary videos at the same time so that they're in sync. Good luck with your program ^^
In Qt Designer I'm creating multiple labels (for instance):
my_label1
my_label2
my_label3
...
my_labeln
Then if I want to hide them I do this:
ui->my_label1->hide();
ui->my_label2->hide();
ui->my_label3->hide();
...
ui->my_labeln->hide();
However I would like to define the labels like
my_label[n]
So then I would be able to do this:
for (int i = 0; i < n; i++)
{
    ui->my_label[i]->hide();
}
I read that I can define the widget like:
QLabel* my_label[5];
but is there any way to do the same from Qt Designer?
Thanks in advance!
Finally I decided to do direct assignment:
QLabel* my_label_array[5];
my_label_array[0] = ui->my_label1;
my_label_array[1] = ui->my_label2;
my_label_array[2] = ui->my_label3;
my_label_array[3] = ui->my_label4;
my_label_array[4] = ui->my_label5;
Then I can do for instance:
for (idx = 0; idx < 5; idx++) my_label_array[idx]->show();
for (idx = 0; idx < 5; idx++) my_label_array[idx]->hide();
for (idx = 0; idx < 5; idx++) my_label_array[idx]->setEnabled(1);
for (idx = 0; idx < 5; idx++) my_label_array[idx]->setDisabled(1);
etc...
Then I was able to perform iterations. I believe it is not the cleanest way to do it, but given my basic knowledge of Qt it is OK for me.
Thank you very much for your answers and your support! This is a great site with great people.
Instead of creating an explicit array, you may be able to name your widgets using a particular scheme and then use QObject::findChildren() on the parent widget to get a list of the widgets you are after.
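A minimal sketch of that approach, assuming the labels keep the my_label1, my_label2, ... names from Qt Designer and using Qt 5's QRegularExpression overload (Qt 4 has an equivalent QRegExp overload):

#include <QLabel>
#include <QRegularExpression>

// Find every QLabel whose objectName matches my_label<number> and hide it.
// "this" is assumed to be the form/widget that contains the labels.
const QList<QLabel *> labels =
    this->findChildren<QLabel *>(QRegularExpression("^my_label\\d+$"));
for (QLabel *label : labels)
    label->hide();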
If you only want to hide widgets, you can put all the widgets you want to hide in an invisible QFrame (set frameShape to NoFrame) and hide them all by calling setVisible(false) on the QFrame. This may cause some unwanted side effects with layouts so you may have to tweak some size policy settings.
In case you are wanting to hide controls so that you can simulate a wizard type UI, you may want to check into QStackedWidget.
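A tiny sketch of the QStackedWidget idea (the stackedWidget and page names are whatever you create in Qt Designer):

// Each page of the QStackedWidget groups a set of widgets; switching the
// current page shows one group and hides all the others.
ui->stackedWidget->setCurrentIndex(0);            // show the first page
ui->stackedWidget->setCurrentWidget(ui->page_2);  // or switch by page widget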
I have another dirty workaround for this:
in header file
// .hpp
class UiBlabla : public QWidget {
    ...
    QLabel** labels;
};
in source file
// constructor
ui->setupUi(this);
labels = new QLabel*[10]{ ui->label_0, ui->label_1, ui->label_2, ui->label_3,
                          ui->label_4, ui->label_5, ui->label_6,
                          ui->label_7, ui->label_8, ui->label_9 };
I haven't seen anything in QtDesigner to do that, but there are a couple of relatively easy ways to get that behavior.
1) Simply store the my_labelx pointers (from QtDesigner) in an array (or better, a QVector):
QVector<QLabel*> my_labels;
my_labels.push_back(ui->my_label1);
my_labels.push_back(ui->my_label2);
Then you can iterate through the QVector.
for (int i = 0; i < my_labels.size(); ++i) {
    my_labels[i]->hide();
}

// or with Qt's foreach macro
foreach (QLabel* label, my_labels)
    label->hide();
There is a little setup needed in terms of adding all the labels to the QVector, but on the plus side you only do that once.
2) Depending on the layout of your GUI, you could have all your labels be children of a container object and iterate through the children.