I need to know how to trigger or open a J2ME application when the GPS coordinates change.
I don't think that starting an application based on GPS co-ordinate changes is possible generically on J2ME. Why not have your application running all the time, monitoring the GPS co-ordinates, and then take action when you detect a significant change?
I posted something much like this on the openHAB and Home Assistant forums too; I will decide what to do based on what I hear...
I am trying to produce an open source Energy Recovery Ventilator, and software is not my forte.
I frankly find the sheer variety and quantity of buzzwords and subsystems in the home automation sphere difficult to navigate. I am unclear on why exactly things have to be so complicated... anyway.
I am using a Raspberry Pi Pico running MicroPython. Do you think it would be practical to make it appear to a Matter hub as basically a fan with several different modes? Maybe report back some info so the user can see some status updates, etc.?
What I basically want is to allow it to be controlled by a hub, which may be running on a phone or someone's PC, so the hub's user interface can be used to turn the device on and off, step it up and down on a schedule, and connect it to other devices like a CO2 detector, smart switch, etc.
Sooner or later I will need, possibly with the help of module(s) running on the Pico that cache data (like schedule data), get the time, and so on, a dictionary that the rest of the system interfaces to (sketched below). The main loop consults the dictionary to determine behaviour at any given moment. The hub checks what time of day it is, etc., and sends that info along.
Is this sort of thing doable?
I tried to look into making the thing Alexa-compatible and, ye gads, it would take me months to get that stuff working. They make everything so complicated.
I found some stuff for ESP32 devices like ESPHome, but it is not practical to use as a module in a larger system. MQTT looks like it could play an important role, but it doesn't quite get me there, and for some reason Alexa, Google Home, etc. still can't really talk to MQTT devices very well, especially when it comes to device setup. Basically, envision a little hardware device that just serves up some fields and takes back some fields, then appears as a device in Google Home's app, etc. I need that, but as a software module that runs on a Pico. Is it practical to roll this, or is it going to be an ungainly undertaking?
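To make the dictionary idea concrete, here is roughly what I am imagining on the Pico side, as a minimal sketch only (it assumes a Pico W with Wi-Fi already connected, the umqtt.simple client installed, and made-up topic names and broker address):

    # Minimal sketch: a state dictionary that the main loop consults,
    # updated by MQTT messages from whatever hub/bridge is on the network.
    # Assumes MicroPython on a Pico W with Wi-Fi already up and umqtt.simple installed.
    import time
    import json
    from umqtt.simple import MQTTClient

    # The dictionary the rest of the system interfaces to.
    state = {"mode": "off", "speed": 0, "schedule": {}}

    def on_message(topic, msg):
        # Hub publishes JSON like {"mode": "on", "speed": 2} to erv/set (made-up topic)
        try:
            state.update(json.loads(msg))
        except ValueError:
            pass  # ignore malformed payloads

    client = MQTTClient("erv-pico", "192.168.1.10")  # placeholder broker address
    client.set_callback(on_message)
    client.connect()
    client.subscribe(b"erv/set")

    def apply_state():
        # Placeholder: drive the fan (PWM / relays) according to the dictionary.
        pass

    while True:
        client.check_msg()                                  # pull any pending updates
        apply_state()                                       # main loop consults the dictionary
        client.publish(b"erv/status", json.dumps(state))    # report status back to the hub
        time.sleep(1)

The idea would be that whatever hub or bridge understands MQTT (as far as I can tell both openHAB and Home Assistant have MQTT integrations) handles the schedule, voice assistants and phone UI, while the Pico just keeps serving and consuming those fields.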
We have several components in the application which communicate via Service Bus queues and topics. We want to build analytics on top of this using AngularJS: we want to show the queue length and the average latency per minute or per hour, depending on the selection. Kindly let me know how we can get this information. From Angular, do we need to call a Web API that calculates the data and shows it in line charts, or do we need to use Stream Analytics? Below is a sample screen, where the x-axis will be time and the y-axis will be the total count of messages.
Check out ASB's Metrics REST API: https://learn.microsoft.com/en-us/rest/api/servicebus/Service-Bus-Entity-Metrics-REST-APIs
Though I am not sure it will provide everything you want out of the box. Specifically, latency info is not there; I have a feeling you'd need to collect and store it yourself.
Also take a look at third-party ASB monitoring products, e.g. https://www.manageengine.com/products/applications_manager/azure-service-bus-monitoring.html, though I personally have not used them.
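If you do end up polling metrics over REST from your own Web API, the shape of the call is roughly this (a sketch in Python for brevity, against the Azure Monitor metrics endpoint; the metric name, api-version, aggregation and token handling here are assumptions to verify against the current docs, not something I have run against your namespace):

    # Sketch: pull per-minute queue-length style numbers for a Service Bus
    # namespace from the Azure Monitor metrics REST endpoint, to feed the
    # Angular chart with time/count pairs. Metric name and api-version are
    # assumptions - check the docs for your namespace.
    import requests

    SUBSCRIPTION = "<subscription-id>"
    RESOURCE_GROUP = "<resource-group>"
    NAMESPACE = "<servicebus-namespace>"
    TOKEN = "<AAD bearer token for https://management.azure.com/>"

    resource_id = (
        f"/subscriptions/{SUBSCRIPTION}/resourceGroups/{RESOURCE_GROUP}"
        f"/providers/Microsoft.ServiceBus/namespaces/{NAMESPACE}"
    )
    url = f"https://management.azure.com{resource_id}/providers/microsoft.insights/metrics"
    params = {
        "api-version": "2018-01-01",
        "metricnames": "ActiveMessages",   # assumed queue-length metric name
        "interval": "PT1M",                # one data point per minute
        "aggregation": "Average",
        "timespan": "2024-01-01T00:00:00Z/2024-01-01T01:00:00Z",  # example window
    }
    resp = requests.get(url, params=params, headers={"Authorization": f"Bearer {TOKEN}"})
    resp.raise_for_status()

    # Print timestamp/value pairs for charting.
    for series in resp.json()["value"][0]["timeseries"]:
        for point in series["data"]:
            print(point["timeStamp"], point.get("average"))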
We want to use a WPF application on a tablet and are looking for the difference in battery usage impact between a Win app and a WPF application.
Is there any battery usage comparison or documentation?
I doubt there is any documentation on what you want, but as suggested above, running your own tests shouldn't be too hard. I don't recall the exact APIs, but on any mobile device there are going to be battery-state objects you can access that give, at the very least, the remaining battery energy. Write two test apps, one using each paradigm. Run each, one at a time, for a long duration, and check the energy usage at the beginning and end.
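If you want a quick-and-dirty way to record the numbers during those runs, a small logger like this would do (a sketch using Python's psutil, just to capture the OS-reported battery percentage over time; for finer-grained energy data you'd go to the platform's own battery APIs):

    # Sketch: log the OS-reported battery percentage once a minute to a CSV
    # while one of the two test apps is running. Run it twice (once per app),
    # then compare the drain curves. Assumes psutil is installed and the
    # tablet is running on battery.
    import csv
    import time
    import psutil

    with open("battery_log.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["timestamp", "percent", "plugged_in"])
        while True:
            batt = psutil.sensors_battery()
            if batt is None:      # no battery info available on this machine
                break
            writer.writerow([time.time(), batt.percent, batt.power_plugged])
            f.flush()
            time.sleep(60)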
This is late for an answer, but one aspect to remember about battery consumption is the use of the radios (Bluetooth and Wi-Fi).
For tablet apps, step back and analyze what data you'll need from the database, and try to get the data in one shot so the OS can turn off the radio. If you make an SQL call each time the user presses a button, the radio is on more and drains the battery. The OS might also leave the radio on "a little longer" in case you make another query.
For the rest of the UI of the app, you're safe to count on an 8-hour shift before they dock it for recharge.
You can watch for the battery notifications as well so you can save the info in the app before the OS shuts you down.
Other than that, each app is unique and you'll need to run these tests during your QA cycle.
Is there any app/method/process that I can use to get the battery consumption of a single app on BlackBerry? I am using a 9300. My application uses GPRS and sends data over the internet.
Until now I have been using a thread which tells me the difference between battery levels after an hour on a phone using my app versus a phone not using my app.
Can you please suggest a better way?
Unfortunately, BlackBerry is not a very open, clear, or well-documented platform.
The best numbers I got came from a conference in Amsterdam.
There's no reliable way to tell how much battery your app has used. As you are already doing, you can retrieve the level before and after by calling DeviceInfo.getBatteryLevel, but the measured difference includes the battery used by other apps too.
I need to make a WPF application that has two windows for its UI that will be used simultaneously by two separate users. It needs to run on a single PC with dual monitors so that each UI screen displays on its own monitor. The app is an industrial controls interface for a machine we are building.
Machine description: The system is a test stand for a pump manufacturer. They would like to have two operators manning the station, so it needs to be able to test two pumps at the same time but not synchronously (each operator may start their test at a different time). The system will test for leaks, vibration, flow, pressure and motor current. There are hundreds of different models, all with different test parameters as well as different test procedures. It is desired to have a single PC and a single PLC as the control hardware. The PC will have dual touchscreen monitors (one for each operator), two bar code scanners (one for each operator) and two Zebra label printers (one for each operator). The PC will interface with an Allen-Bradley CompactLogix PLC via EtherNet/IP. The PLC will be programmed to control all of the actuators and sensors on the machine. The PC will command the PLC to execute the various test sequences after it has written the appropriate parameters to the PLC. The PLC will collect data during a test sequence, and the PC app will retrieve it and write it to persistent storage.
Application description: The application will use an SQL Express database to store all the pump models' testing parameters as well as the data collected during the test of each pump. The app will provide twin UIs that have identical functionality but are capable of operating independently of each other. The app will have a UI screen for entering and editing the parameters for all the different pump models, another screen to view the data collected for a given pump, and a main screen that will display info on the pump currently under test, such as the parameters that are in use, the test progress and live transducer data. A usage scenario is as follows: The operator receives a batch of pumps along with a work order sheet. He/she scans the bar code on the work order; the app decodes the scan and extracts the model number, then retrieves the test parameters from the database and displays the info on screen. After operator confirmation it writes the parameters and test sequence to the PLC. The operator loads the pump into the test chamber and closes a safety door. A "Begin Test" button is presented to the operator after the PLC confirms the pump is present and the safety doors are closed. The operator presses the "Begin Test" button, and the PC and PLC talk to each other to perform the test sequence while the PC updates the UI to keep the operator informed of the progress and results for each step of the sequence. When testing is complete, the PC generates a GUID for the pump, stores the test data linked to the GUID in the database, and prints a bar code label encoded with the GUID and a pass/fail status code. The safety door then unlocks. The operator at the second station is performing the same tasks but with a different work order, which can be for a different model pump, so the testing on the two stations is completely independent.
My question is this: Is it possible to have a single WPF app instantiate two separate UI threads on separate monitors so that both UI windows appear to have focus simultaneously? And if so, how do you do it?
A couple of other points to deal with: each monitor will be a touch screen, so two separate touch/mouse inputs need to be handled, and each user will have a bar code scanner, so two USB or serial scanners will need to be monitored for input.
You cannot have two windows with window focus at the same time, regardless of how many threads are in use. The best solution is to just create a standard WPF app with one giant window; this will allow you to do what you want.
Sounds like you'd be better off with two machines and one instance running on each.
Others have already brought up the focus issues, but you also have double the sensors etc. to manage.
You could either spend $$ writing a super complicated app that violates most of the rules about one set of inputs + focus, or spend that same $$ getting another machine to run the app.
As the other answers stated, only one window can have focus at a time. Input from either user will reach the last focused window.
Consider creating one server app and two remote UI apps that communicate with the server. That way, you have one app running the logic, but two remote apps feeding it input from two separate machines. (One of the input machines can also be used as the server's machine.)
You could also have two desktop apps that communicate directly with each other (on separate machines) with no server app, but that would be a little trickier to implement.
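Just to illustrate the shape of that server idea (not WPF code, only a bare-bones sketch of one process owning the test logic while two operator UIs connect independently; the port and "protocol" are made up):

    # Sketch: one server process owns the shared test/PLC logic; each operator
    # UI connects as an independent client and gets its own handler thread,
    # so the two stations never block each other.
    import socketserver
    import threading

    class OperatorSession(socketserver.StreamRequestHandler):
        def handle(self):
            # One thread per operator UI; commands are newline-delimited text
            # like "SCAN <workorder>" or "BEGIN_TEST" (made-up protocol).
            for line in self.rfile:
                command = line.decode().strip()
                # ...dispatch to the shared test/PLC logic here...
                self.wfile.write(f"ACK {command}\n".encode())

    class ThreadedServer(socketserver.ThreadingMixIn, socketserver.TCPServer):
        daemon_threads = True
        allow_reuse_address = True

    if __name__ == "__main__":
        server = ThreadedServer(("0.0.0.0", 5000), OperatorSession)
        threading.Thread(target=server.serve_forever, daemon=True).start()
        input("Server running; press Enter to stop.\n")
        server.shutdown()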
I don't know for sure if you can have two focuses (foci?) at once in a WPF app, but it sounds like it could get messy quickly. It seems to me that a much cleaner solution would be to run two separate instances of the application.
Windows will only send touch input to one window at a time. Nothing you can do about that. The 'workaround' would be to handle all of the input from within one window, do some hit testing, and then react accordingly.