PlaybackNearlyFinished and PlaybackFinished occur almost at the same time?

It turns out that sometimes (not always, but very frequently) PlaybackNearlyFinished and PlaybackFinished occur almost at the same time. What is also confusing is that both events convey exactly the same offset, one that represents the very end of the stream.
When this happens, the next stream scheduled in PlaybackNearlyFinished does not start; the playback just finishes.
Unless this is a bug in Alexa or its infrastructure, I cannot figure out how to implement playback for a playlist; there is simply no way to reliably schedule the upcoming track...
Is there anything I could do in my code to make it work well?
I am using a 2nd-generation Echo Dot physically located in Europe, the Java SDK, AWS Lambda, and DynamoDB.

It looks like it is figured out now: in order to queue the next stream properly, one needs to come up with truly unique stream tokens. This means that even the same file/URL should be enqueued under a unique token.
In my example above, I had used the index of the track in the playlist as the token. Once I solved it as below, everything started to work like a charm:
import org.apache.commons.lang3.RandomStringUtils;

public class TokenService {

    // Builds a token that is unique even when the same track is enqueued twice:
    // "<playlist index>:<random 16-char suffix>".
    public String createToken(int playbackPosition) {
        String suffix = RandomStringUtils.randomAlphanumeric(16);
        return playbackPosition + ":" + suffix;
    }

    // Recovers the playlist index from the part before the ":".
    public int tokenToPlaybackIndex(String token) {
        String positionStr = token.split(":")[0];
        return Integer.parseInt(positionStr);
    }
}
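For context, here is roughly how such a token might be used when answering PlaybackNearlyFinished; a minimal sketch, assuming the AudioPlayer classes of the v1 Java Alexa Skills Kit, where trackUrl, nextIndex, currentToken, and tokenService are placeholders for your own playlist state:

import com.amazon.speech.speechlet.interfaces.audioplayer.AudioItem;
import com.amazon.speech.speechlet.interfaces.audioplayer.PlayBehavior;
import com.amazon.speech.speechlet.interfaces.audioplayer.Stream;
import com.amazon.speech.speechlet.interfaces.audioplayer.directive.PlayDirective;

// Enqueue the next track under a token that is unique per enqueue.
Stream stream = new Stream();
stream.setUrl(trackUrl);                              // hypothetical: URL of the next track
stream.setToken(tokenService.createToken(nextIndex)); // unique even for a repeated track
stream.setExpectedPreviousToken(currentToken);        // token of the stream playing right now
stream.setOffsetInMilliseconds(0);

AudioItem audioItem = new AudioItem();
audioItem.setStream(stream);

PlayDirective play = new PlayDirective();
play.setPlayBehavior(PlayBehavior.ENQUEUE);
play.setAudioItem(audioItem);

AudioPlayer matches setExpectedPreviousToken against the token of the currently playing stream, which is presumably why duplicated tokens made the enqueued stream get dropped.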
Hope it helps somebody!

Related

Camel: how to aggregate files based on exchange in pattern

I have a class to run my route. The input comes from a queue (which is filled by another route that runs a query and inserts the rows as messages on the queue).
These messages each contain a few headers:
- pdu_id: basically a prefix of the filename.
- pad: the path the files reside in
What should happen: I want the files in that path matching "pdu_id".* packed into a tar; after that, a REST call is to be made to remove the document's source.
I know a route has a from; but basically I need a route with a dynamic "from", and as the code example below shows, chaining from()s does not do the trick.
The question is what to use instead. I could not find anything similar, but it may be that I did not use the right Google search, in which case I'm deeply sorry.
public class ToDeleteTarAndDeleteRoute extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        from("broker1:todelete.message_ids.queue")
            .from("file:///?fileName=${in.header.pad}${in.header.pdu_id}.*")
            .aggregate(new TarAggregationStrategy())
                .constant(true)
                .completionFromBatchConsumer()
                .eagerCheckCompletion()
            .to("file:///?fileName=${in.header.pad}${in.header.pdu_id}.tar")
            .log("${header.pdu_id} tarred")
            .setHeader(Exchange.HTTP_METHOD, constant("DELETE"))
            .setHeader("Connection", constant("Close"))
            .enrich()
            .simple("http:127.0.0.1/restfuldb${header.pdu_id}?httpClient.authenticationPreemptive=true")
            .log("${header.pdu_id} tarred and deleted.");
    }
}
Yes, pollEnrich can help you do this. You should use it something like this:
from("broker1:todelete.message_ids.queue")
.aggregationStrategy(new TarAggregationStrategy())
.pollEnrich()
.simple("file:///?fileName=${in.header.pad}/${in.header.pdu_id}.*")
.unmarshal().string()
.to("file:///?fileName=${in.header.pad}/${in.header.pdu_id}.tar")
.log("${header.pdu_id} tarred")
.setHeader(Exchange.HTTP_METHOD, constant("DELETE"))
.setHeader("Connection", constant("Close"))
.enrich()
.simple("http:127.0.0.1/restfuldb${header.pdu_id}?httpClient.authenticationPreemptive=true")
.log("${header.pdu_id} tarred and deleted.");
In the end, the solution consisted of a few changes based on @daBigBug's answer:
- pollEnrich's simple expression uses antInclude rather than fileName.
- aggregate() is placed after pollEnrich, since each batch is the set of files rather than the input from the queue; the queue input only provides meta information on which actions are to be taken.
- aggregationStrategy() is not available in a RouteBuilder; I used aggregate() instead.
- I removed the unmarshal(); I don't see why it would be needed, since the files can contain binary content.
from("broker1:todelete.message_ids.queue")
.pollEnrich()
.simple("file:${in.header.pad}?antInclude=${in.header.pdu_id}.*")
.aggregate(new TarAggregationStrategy())
.constant(true)
.completionFromBatchConsumer()
.eagerCheckCompletion()
.log("tarring to: ${header.pad}${header.pdu_id}.tar")
.setHeader(Exchange.FILE_NAME, simple("${header.pdu_id}.tar"))
.setHeader(Exchange.FILE_PATH, simple("${header.pad}"))
.to("file://ignored")
...(and the rest of the operations);
I now see the files are getting picked up and even placed in a tar; however, the filename of the tar is unexpected, as is the location (it is placed in ./ignored). Also, in the rest of the operations it appears the exchange headers are lost.
If anyone can help figure out how to preserve the headers in a safe way, I'm much obliged. Should I open a new question for that, or should I rephrase this one?
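Not an authoritative fix, but one common way to keep headers across an aggregation is to copy them over explicitly in the aggregation strategy; a minimal sketch, assuming it wraps the TarAggregationStrategy already in use:

import org.apache.camel.Exchange;
import org.apache.camel.processor.aggregate.AggregationStrategy;

// Delegates the actual tarring, but copies the first exchange's headers
// onto the aggregated result so they survive the aggregate() step.
public class HeaderPreservingStrategy implements AggregationStrategy {

    private final AggregationStrategy delegate; // e.g. new TarAggregationStrategy()

    public HeaderPreservingStrategy(AggregationStrategy delegate) {
        this.delegate = delegate;
    }

    @Override
    public Exchange aggregate(Exchange oldExchange, Exchange newExchange) {
        Exchange result = delegate.aggregate(oldExchange, newExchange);
        if (oldExchange == null) {
            // first exchange of the batch: remember its headers on the result
            result.getIn().getHeaders().putAll(newExchange.getIn().getHeaders());
        }
        return result;
    }
}

It would then be passed as aggregate(new HeaderPreservingStrategy(new TarAggregationStrategy())) in the route above.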

'Concurrent saves are not allowed' with manager.enableSaveQueuing(true)

Why does Breeze keep throwing 'Concurrent saves are not allowed' with the manager.enableSaveQueuing(true) option enabled?
Simply because you're trying to issue multiple saves at the same time; by default, Breeze does not allow concurrent saves.
In your case, you can override the option to allow concurrent saves as follows:
var so = new breeze.SaveOptions({ allowConcurrentSaves: true });
return manager.saveChanges(null, so)
    .then(saveSucceeded)
    .fail(saveFailed);
EDIT
Since you are using the "saveQueuing" plugin, ignore my first answer, as it only applies to concurrent saves.
I'm not aware of how your code works, but there are a few things to consider in the case of save queuing:
- Only issue manager.saveChanges() once in your code.
- On the server side, override the BeforeSaveEntity() method and use a mutual-exclusion lock statement in your new SaveChanges() method; your code may look something like this:
// assumes a shared lock object declared once, e.g.:
// private static readonly object __lock = new object();
public void SaveChanges(SaveWorkState saveWorkState)
{
    lock (__lock) // blocks any attempt to issue concurrent saves on the same rows
    {
        // saving operations go here
    }
}
You might want to look at it in the NoDB Sample.

Using/Searching AsyncDataProvider with Objectify / Google App Engine

I currently have an application which uses Activities/Places and an AsyncDataProvider.
Right now, every time the activity loads up, it uses the request factory to retrieve the data (currently not a lot, but it will get very large soon) and passes it to the view to update the DataGrid. Before the grid is updated, the data is filtered based on a search box.
Right now, I have implemented updating the DataGrid as follows (this code isn't the prettiest):
private void updateData() {
    final AsyncDataProvider<EquipmentTypeProxy> provider = new AsyncDataProvider<EquipmentTypeProxy>() {
        @Override
        protected void onRangeChanged(HasData<EquipmentTypeProxy> display) {
            int start = display.getVisibleRange().getStart();
            int end = start + display.getVisibleRange().getLength();
            final List<EquipmentTypeProxy> subList = getSubList(start, end);
            end = (end >= subList.size()) ? subList.size() : end;
            if (subList.size() < DATAGRID_PAGE_SIZE) {
                updateRowCount(subList.size(), true);
            } else {
                updateRowCount(data.size(), true);
            }
            updateRowData(start, subList);
        }

        private List<EquipmentTypeProxy> getSubList(int start, int end) {
            final List<EquipmentTypeProxy> filteredEquipment;
            if (searchString == null || searchString.equals("")) {
                if (data.isEmpty() == false && data.size() > (end - start)) {
                    filteredEquipment = data.subList(start, end);
                } else {
                    filteredEquipment = data;
                }
            } else {
                filteredEquipment = new ArrayList<EquipmentTypeProxy>();
                for (final EquipmentTypeProxy equipmentType : data) {
                    if (equipmentType.getName().contains(searchString)) {
                        filteredEquipment.add(equipmentType);
                    }
                }
            }
            return filteredEquipment;
        }
    };
    provider.addDataDisplay(dataGrid);
}
Ultimately, what I would like to do is load only the necessary data at first (the default page size in this application is 25).
Unfortunately, to my current understanding, with Google App Engine there is no order to any of the IDs (one entry has an ID of 3, the next has an ID of 4203).
What I'm wondering is: what is the best way to retrieve a subset of data from Google App Engine when using Objectify?
I was looking into using offset and limit, but another Stack Overflow post (http://stackoverflow.com/questions/9726232/achieve-good-paging-using-objectify) basically said this is inefficient.
The best information I've found is the following link (http://stackoverflow.com/questions/7027202/objectify-paging-with-cursors). The answer there says to use cursors, but also says this is inefficient. I'm also using Request Factory, so I will have to store the cursor in my user session (if that is incorrect, please let me know).
Currently, since there isn't likely to be a lot of data (maybe 200 rows total for the next few months), I am just pulling the entire set back to the client as a temporary hack. I know this is the worst way to do it, but I would like input on the best way before wasting my time implementing another hack; every single post I've read on this makes it seem like there is no really solid way to do it.
Something else I am considering: currently my searching/page loading is lightning fast because all the data is already on the client side. I use a KeyUpEvent handler in the search box to filter the data. I don't think there is any way to keep this speed while making a call to the server; is there any accepted solution to this problem?
Thank you very much
Go with cursors. They are as efficient as it gets: a cursor stores the point where the last query ended and continues from there. The answer you linked actually does not discuss the efficiency of cursors vs. offset (there is a comment there that is wrong).
You can use limit with cursors; it does not affect efficiency.
Also, cursors can be serialized via cursor.toWebSafeString() and sent to the client via RPC, so you do not need to save them in the session. You can even use them as a fragment identifier (a history token in GWT parlance); that way a certain "page" of your result set can be bookmarked.
(Offset is "inefficient" because the datastore actually loads, and charges you for, all entities up to offset+limit, but only returns limit entities.)
OTOH, if you already know the query parameters when the page is loaded, then just run the query at page generation time instead of invoking it via RPC. Also, if you have a small set of data (<1000), you could just preload all entity IDs as part of the page HTML.

Windows Phone 7 App Quits when I attempt to deserialize JSON

I'm developing my first Windows Phone 7 app, and I've hit a snag. Basically it's just reading a JSON string of events and binding it to a list (using the list app starting point).
public void Load()
{
    // form the URI
    UriBuilder uri = new UriBuilder("http://mysite.com/events.json");
    WebClient proxy = new WebClient();
    proxy.OpenReadCompleted += new OpenReadCompletedEventHandler(OnReadCompleted);
    proxy.OpenReadAsync(uri.Uri);
}

void OnReadCompleted(object sender, OpenReadCompletedEventArgs e)
{
    if (e.Error == null)
    {
        var serializer = new DataContractJsonSerializer(typeof(EventList));
        var events = (EventList)serializer.ReadObject(e.Result);
        foreach (var ev in events)
        {
            Items.Add(ev);
        }
    }
}

public ObservableCollection<EventDetails> Items { get; private set; }
EventDetails is my class that wraps the JSON string. This class has to be correct, because it is an exact copy of the class used internally by the website from which the JSON is generated...
I get the JSON string correctly from the WebClient call (I read the MemoryStream and the JSON is indeed there), but as soon as I attempt to deserialize the string, the application exits and the debugger stops.
I get no error message or any indication that anything happened; it just stops. This happens if I type the deserialize method into the Watch window as well...
I have already tried using JSON.NET; in fact, I thought maybe it was a problem with JSON.NET, so I converted the code to use the native deserializer in the .NET framework, but the error is the same either way.
Why would the application just quit? Shouldn't it give me SOME kind of error message?
what could I be doing wrong?
many thanks!
Firstly, the fact that you have a string there that looks like JSON does not mean that you have valid JSON. Try converting a simple one.
If your JSON is valid, it might be that your JSON implementation does not know how to convert a list to an EventList. Give it a try with an ArrayList instead and let me know.
The application closes because an unhandled exception happens. If you check the App.xaml.cs file, you will find the code that closes your app. What you need to do is wrap your deserialization in a try/catch and handle the exception locally. Most likely you have some JSON the DataContractJsonSerializer does not like. I have had issues with it deserializing WCF JSON and have had to go other routes.
You may also want to check that your JSON is valid; just because your website likes it does not mean it actually is, since the code on your site may be compensating for a problem. Drop a copy of your JSON object (the string) into http://jsonlint.com/ to see whether it is valid. Crockford (the guy who created JSON) wrote this site to validate JSON, so I would rely on it more than your site ;) This little site has really helped me out of some issues over the past year.
I ran into this same kind of problem when trying to migrate some existing Windows Mobile code to run on WP7. I believe that a WP7 app crashes whenever it loads an assembly (or class?) that references something that is not available on WP7. In my case, I think it was Assembly.Load or something in the System.IO namespace, related to file access via paths.
While your case might be something completely different, the symptoms were exactly the same.
The only thing I can recommend is to go through the JSON library and see if it's referencing base classes that are not allowed in WP7. Note that it doesn't even have to hit the line of code that's causing the issue - it'll crash as soon as it tries to hit the class that contains the bad reference.
If you can step into the JSON library, you can get a better idea of which class is causing the problem, because as soon as the code references it, the whole app will crash and the debugger will stop.

Provide a database packaged with the .APK file or host it separately on a website?

Here is some background about my app:
I am developing an Android app that will display a random quote or verse to the user. For this I am using an SQLite database. The size of the DB would be approximately 5K to 10K records, possibly increasing to up to 1M in later versions as new quotes and verses are added. Thus the user would need to update the DB as and when newer versions of the app or DB are released.
After reading through some forums online, there seem to be two feasible ways I could provide the DB:
1. Bundle it along with the .APK file of the app, or
2. Upload it to my app's website from where users will have to download it
I want to know which method would be better (if there is yet another approach other than these, please do let me know).
After pondering this problem for some time, I have these thoughts regarding the above approaches:
Approach 1:
Users will obtain the DB along with the app and won't have to download it separately. Installation would thereby be easier. But users will have to reinstall the app every time there is a new version of the DB. Also, if the DB is large, it will make the installer too cumbersome.
Approach 2:
Users will have to download the full DB from the website (although I can provide a small sample version of the DB via Approach 1). But the installer will be simpler and smaller in size. Also, I would be able to provide future versions of the DB easily to those who might not want newer versions of the app.
Could you please tell me from a technical and an administrative standpoint which approach would be the better one and why?
If there is a third or fourth approach better than either of these, please let me know.
Thank you!
Andruid
I built a similar app for Android which gets periodic updates with data from a government agency. It's fairly easy to build an Android-compatible DB off the device using Perl or similar and download it to the phone from a website; this works rather well, plus the user gets current data whenever they download the app. It's also supposed to be possible to put the data on the SD card if you want to avoid using primary data storage space, which is a bigger concern for my app, which has a ~6MB database.
In order to make Android happy with the DB, I believe you have to do the following (I build my DB using Perl):
$st = $db->prepare( "CREATE TABLE \"android_metadata\" (\"locale\" TEXT DEFAULT 'en_US')");
$st->execute();
$st = $db->prepare( "INSERT INTO \"android_metadata\" VALUES ('en_US')");
$st->execute();
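For comparison, if you ever need to create that metadata table from Java on the device instead, a minimal sketch using android.database.sqlite (dbFile is a hypothetical File pointing at your database):

import java.io.File;
import android.database.sqlite.SQLiteDatabase;

// Same two statements as the Perl above, run on-device.
void addAndroidMetadata(File dbFile) {
    SQLiteDatabase db = SQLiteDatabase.openOrCreateDatabase(dbFile, null);
    db.execSQL("CREATE TABLE IF NOT EXISTS android_metadata (locale TEXT DEFAULT 'en_US')");
    db.execSQL("INSERT INTO android_metadata VALUES ('en_US')");
    db.close();
}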
I have an update activity which checks whether updates are available and, if so, presents an "update now" screen. The download process looks like this and lives in a DatabaseHelper class.
public void downloadUpdate(final Handler handler, final UpdateActivity updateActivity) {
    URL url;
    try {
        close();
        File f = new File(getDatabasePath());
        if (f.exists()) {
            f.delete();
        }
        getReadableDatabase();
        close();
        url = new URL("http://yourserver.com/" + currentDbVersion + ".sqlite");
        URLConnection urlconn = url.openConnection();
        final int contentLength = urlconn.getContentLength();
        Log.i(TAG, String.format("Download size %d", contentLength));
        handler.post(new Runnable() {
            public void run() {
                updateActivity.setProgressMax(contentLength);
            }
        });
        InputStream is = urlconn.getInputStream();
        // Open the empty db as the output stream
        OutputStream os = new FileOutputStream(f);
        // transfer bytes from the input file to the output file
        byte[] buffer = new byte[1024 * 1000];
        int written = 0;
        int length = 0;
        while (written < contentLength) {
            length = is.read(buffer);
            if (length == -1) {
                break; // stream ended early; avoids writing a negative length
            }
            os.write(buffer, 0, length);
            written += length;
            final int currentprogress = written;
            handler.post(new Runnable() {
                public void run() {
                    Log.i(TAG, String.format("progress %d", currentprogress));
                    updateActivity.setCurrentProgress(currentprogress);
                }
            });
        }
        // Close the streams
        os.flush();
        os.close();
        is.close();
        Log.i(TAG, "Download complete");
        openDatabase();
    } catch (Exception e) {
        Log.e(TAG, "bad things", e);
    }
    handler.post(new Runnable() {
        public void run() {
            updateActivity.refreshState(true);
        }
    });
}
Also note that I keep a version number in the filename of the db files, and a pointer to the current one in a text file on the server.
It sounds like your app and your db are tightly bound -- that is, the db is useless without the app and the app is useless without the db, so I'd say go ahead and put them both in the same .apk.
That being said, if you expect the db to change very slowly over time, but the app to change quicker, and you don't want your users to have to download the db with each new app revision, then you might want to unbundle them. To make this work, you can do one of two things:
1. Install them as separate applications, but make sure they share the same user ID using the sharedUserId tag in the AndroidManifest.xml file.
2. Install them as separate applications, and create a ContentProvider for the database. This way other apps could make use of your database as well (if that is useful); a minimal sketch follows below.
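For the ContentProvider option, a read-only sketch might look like this; the authority, database name, and table name are assumptions for illustration:

import android.content.ContentProvider;
import android.content.ContentValues;
import android.database.Cursor;
import android.database.sqlite.SQLiteDatabase;
import android.net.Uri;

// Exposes the quotes table to other apps, read-only.
public class QuotesProvider extends ContentProvider {
    public static final Uri CONTENT_URI =
            Uri.parse("content://com.example.quotes.provider/quotes"); // hypothetical authority
    private SQLiteDatabase db;

    @Override
    public boolean onCreate() {
        // open (or create) the packaged database; a full helper is omitted for brevity
        db = getContext().openOrCreateDatabase("quotes.db", 0, null);
        return db != null;
    }

    @Override
    public Cursor query(Uri uri, String[] projection, String selection,
                        String[] selectionArgs, String sortOrder) {
        return db.query("quotes", projection, selection, selectionArgs, null, null, sortOrder);
    }

    @Override
    public String getType(Uri uri) {
        return "vnd.android.cursor.dir/vnd.com.example.quotes";
    }

    // Read-only provider: reject writes.
    @Override
    public Uri insert(Uri uri, ContentValues values) {
        throw new UnsupportedOperationException("read-only");
    }

    @Override
    public int update(Uri uri, ContentValues values, String selection, String[] selectionArgs) {
        throw new UnsupportedOperationException("read-only");
    }

    @Override
    public int delete(Uri uri, String selection, String[] selectionArgs) {
        throw new UnsupportedOperationException("read-only");
    }
}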
If you are going to store the db on your website, then I would recommend that you just make RPC calls to your web server and get the data that way, so the device never has to deal with a local database. Using a cache manager to avoid repeated lookups will help as well, so pages will not have to look up data each time they reload. Also, if you need to update the data, you do not have to ship a new app every time. Using HttpClient is pretty straightforward; if you need any examples, please let me know.
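For instance, a minimal sketch of such a lookup with the Apache HttpClient that ships with Android (the server URL and the quote endpoint are hypothetical):

import org.apache.http.HttpResponse;
import org.apache.http.client.HttpClient;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.DefaultHttpClient;
import org.apache.http.util.EntityUtils;

// Fetches one random quote from the server instead of a local DB.
public String fetchRandomQuote() throws Exception {
    HttpClient client = new DefaultHttpClient();
    HttpGet get = new HttpGet("http://yourserver.com/quotes/random"); // hypothetical endpoint
    HttpResponse response = client.execute(get);
    return EntityUtils.toString(response.getEntity()); // quote text (or JSON) as a string
}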
