How can I access the iTunes DB in VB6?

I am trying to rewrite a closed-source program called Pod Player (written in VB6). To do anything, I need to be able to access the iPod's DB and load it into a series of ListBox controls. The things I need to access are: any playlists and what they contain, the iPod's name, track numbers, song titles, genres, artists, albums, paths to the songs, their rating, UIN, file size, length, and preferably album artwork. I also need to be able to change rating information if needed. So how can I interact (read/write) with the iPod's DB in VB6?
Or is it possible to write a DLL or OCX in another language that can do this and be called/used by the VB6 program?
I should also mention that Pod Player uses some or most of SharePod's code (before SP went .NET).
I found a database parser on Planet Source Code and tried out the demo form included, as well as an implementation of it in my Pod Player rewrite, but (according to the demo form) it only reads 76% of the database before dropping out due to a playlist-related problem. I tested it on an iPod nano 4G and an iPod shuffle 4G, and both are completely compatible with Pod Player. How can I get the parser (it's in iPod.bas) to work correctly?

Have a look at this page: http://homepage.ntlworld.com/simon.mason20/ipod_tunes_spec.htm
It contains details of the iTunes database format.
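To give a feel for what that spec describes, here is a minimal sketch that walks the database's top-level structure (Python for brevity; the layout is language-neutral and straightforward to port to VB6). The chunk ids ('mhbd', 'mhsd') and the little-endian header layout come from the spec page; the file path and the dataset-type names are assumptions.

import struct

DATASET_TYPES = {1: "track list", 2: "playlist list", 3: "podcast list"}  # per the spec

# hypothetical path to the database on a mounted iPod
with open("iPod_Control/iTunes/iTunesDB", "rb") as f:
    data = f.read()

# the file starts with an mhbd chunk: magic, header length, total length
magic, header_len, total_len = struct.unpack_from("<4sII", data, 0)
assert magic == b"mhbd", "not an iTunesDB file"

# children of mhbd are mhsd datasets laid end to end after the mhbd header
offset = header_len
while offset < total_len:
    m, hlen, tlen, ds_type = struct.unpack_from("<4sIII", data, offset)
    if m != b"mhsd":
        break
    print("dataset type %d (%s), %d bytes" % (ds_type, DATASET_TYPES.get(ds_type, "?"), tlen))
    offset += tlen  # skip to the next dataset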

Related

SSRS Data Driven Subscription that customizes DeviceInfo settings like PageWidth, PageHeight?

I want to set up data-driven subscriptions to mass-produce PNG files. The problem is that adding a new Extension for PNG in rsreportserver.config under Configurations/Extensions/Render only gives one fixed size of PNG file.
Report A really ought to output a 6in x 3in PNG file and report B ought to output a 6in x 4in PNG file.
Yes, I could create multiple entries in rsreportserver.config, but this is confusing for end users, as they all show up needlessly in every user's export dropdown.
I proposed doing the mass image generation with an external program that generates a custom URL for each PNG (DeviceInfo settings can be part of the URL) and calls WebClient.DownloadFile() in a loop, but my supervisor is currently locked into the idea of data-driven subscriptions for whatever reason.
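For reference, the URL-access idea looks roughly like the sketch below (Python instead of WebClient.DownloadFile(); the server name, report paths, and page sizes are made up, while rs:Format=IMAGE, rc:OutputFormat, rc:PageWidth, and rc:PageHeight are the standard DeviceInfo settings for the image renderer; authentication is omitted).

import urllib.request
from urllib.parse import quote

BASE = "http://reportserver.example.com/ReportServer"  # hypothetical server

jobs = [
    ("/Sales/ReportA", "6in", "3in", "report_a.png"),
    ("/Sales/ReportB", "6in", "4in", "report_b.png"),
]

for path, width, height, outfile in jobs:
    url = (BASE + "?" + quote(path)
           + "&rs:Command=Render&rs:Format=IMAGE"
           + "&rc:OutputFormat=PNG"
           + "&rc:PageWidth=" + width
           + "&rc:PageHeight=" + height)
    # same idea as WebClient.DownloadFile() in a loop
    urllib.request.urlretrieve(url, outfile)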
Per @iamdave's suggestion, just setting the overall page dimensions in the report designer gives a suitably sized PNG file via data-driven subscription, without having to hardcode PNG dimensions in rsreportserver.config.
The reason I didn't notice this initially is that the reports in question were graphs that were only ever used as subreports on an encompassing megareport and had never really been run as standalone reports. When used as a subreport, page dimensions never come into play, so they had been left at the default 8.5x11.

Freebase: What data dump file contains the "imdb_id"?

I run IMDbAPI.com and have been using Bing's Search API to find IMDb IDs from title searches. Bing is currently moving its API over to the Azure Marketplace (August 1st), where it is no longer available for free. I started testing my API using Freebase to resolve these IDs and hit their 100k limit in the first 8 hours (my site currently gets about 3 million requests a day, but only 200-300k are title searches).
This is exactly why they offer the data dump files, so I downloaded most of the files in the Film folder, but I cannot find where they are storing the "/authority/imdb/title" IMDb ID namespace data.
This is how I'm currently accessing the ID:
https://www.googleapis.com/freebase/v1/mqlread?query={"type":"/film/film","name":"True%20Grit","imdb_id":null,"initial_release_date>=":"1969-01","limit":1}
Does anyone know which file contains this information, and how to link back to it from the film title/ID?
That imdb_id property is backed by a key in the /authority/imdb/title namespace, so you're looking for the line:
/m/015gxt /type/object/key /authority/imdb/title tt0065126
in the file http://download.freebase.com/datadumps/latest/freebase-datadump-quadruples.tsv.bz2
That's a 4 GB file, so be prepared to wait a little while for the download. Note that everything is keyed by MID, so you'll need to figure that out first if you don't have it in your database.
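If it helps, here is a rough Python sketch of streaming the dump and collecting the MID-to-IMDb-id mapping without decompressing the whole file to disk first. The four tab-separated columns (source, property, destination, value) match the sample line above; the filename is taken from the download URL.

import bz2

imdb_ids = {}
with bz2.open("freebase-datadump-quadruples.tsv.bz2", "rt", encoding="utf-8") as f:
    for line in f:
        fields = line.rstrip("\n").split("\t")
        # keep only the /type/object/key quadruples in the IMDb title namespace
        if len(fields) == 4 and fields[1] == "/type/object/key" \
                and fields[2] == "/authority/imdb/title":
            mid, imdb_id = fields[0], fields[3]
            imdb_ids[mid] = imdb_id  # e.g. imdb_ids["/m/015gxt"] == "tt0065126"

print(len(imdb_ids), "films with IMDb ids")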
The equivalent query using MQL instead of the data dumps is https://www.googleapis.com/freebase/v1/mqlread?query=%7B%22type%22%3a%22/film/film%22,%22name%22%3a%22True%20Grit%22,%22imdb_id%22%3anull,%22initial_release_date%3E=%22%3a%221969-01%22,%22mid%22:null,%22key%22:[{%22namespace%22:%22/authority/imdb/title%22}],%22limit%22:1%7D&indent=1
EDIT: p.s. I'm pretty sure the files in the Browse directory are going away, so I wouldn't depend on them even if you could find the info there.
The previous answer works fine; it's just that a snappier version of such a query could be:
query = [{
'type': '/film/film',
'name': 'prometheus',
'imdb_id': null,
...
}];
The rest of the MQL request isn't shown, as it doesn't differ from the query above. Hope that helps.

Difficulty with filename and filemime when using Migrate module

I am using the Drupal 7 Migrate module to create a series of nodes from JPG and EPS files. I can get them to import just fine, but I notice that when I am done importing, none of the attached filefield and thumbnail files on the nodes it creates contain filename information.
Upon inspecting the file_managed table, I see that both the filename and filemime fields are empty for ONLY the files that I attached via the Migrate module. This also creates an issue with downloading the files.
Now I think the problem has to do with the fact that I am using "file_link" instead of "file_copy" as the file operation I specify. The thing is, I am importing around 2TB (that's terabytes) of image files. We had to put in a special request with Rackspace just to get access to that much disk space on our server, so I can't go around copying files from one directory to the next because of space issues. "file_link" therefore seems like the obvious choice.
Now you probably want to see how I am doing this exactly, so here is the code snippet:
$jpg_arguments = MigrateFileFieldHandler::arguments(
  NULL,                                     // no base path
  'file_link', FILE_EXISTS_RENAME, 'en',
  array('source_field' => 'jpg_name'),      // description
  array('source_field' => 'jpg_filename'),  // title
  array('source_field' => 'jpg_filename')   // alt
);
$this->addFieldMapping('field_image', 'jpg_uri')
  ->arguments($jpg_arguments);
As you can see, I am specifying no base path (just like the beer.inc example file does). I have set file_link, the language, and the source fields for the description, title, and alt.
It is able to generate thumbnails from the JPGs, but those columns of data are still missing in the db table. I traced through the functions as best I could, but I don't see what is causing this. I tried running the URIs in the table through the functions that generate the filename and the filemime, and they output just fine. It is like something is removing just those segments of data.
Does anyone have any idea what this could be? I am using the Drupal 7 Migrate module version 2.2. It is running on Drupal 7.8.
Thanks,
Patrick
OK, so I have found the answer to yet another question of mine. This is actually an issue with the Migrate module itself. The issue is documented here. I will be rescinding this bounty (as soon as I figure out how).

Torrent file protocol - custom field

I am wondering if there is any available field in .torrent files that could be used for custom functionality in someone's implementation of a torrent client. For example, one might want to encode a URL to the file owner's website; someone else, a custom message to be displayed when opening the files; etc. Is something like this feasible in the current implementation of .torrent files?
Yes. .torrent files are just bencoded dictionaries and can hold arbitrary key-value pairs.
The main consideration when adding a custom field is to determine whether it should go into the root of the .torrent or inside the info dictionary.
If it goes into the root, it will not affect the info hash (which is the unique identifier of the torrent), and it will also not be available when downloading magnet links.
If it goes into the info dictionary, it is sort of locked down to the info-hash, in the sense that the info-hash depends on it. It will be transferred as part of the metadata when downloading magnet links and it cannot be changed (without changing the info-hash and thus creating a separate swarm).
So, if it's something that third parties should be able to change after the torrent is created, it should go in the root; if you want it entered once when the torrent is created and never changed, it should go in the info dict.
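To make the root-versus-info distinction concrete, here is a small Python sketch with a hand-rolled, demo-only bencoder and made-up field values. It shows that a custom key in the root leaves the info-hash untouched, while the same key inside the info dict changes it.

import hashlib

def bencode(v):
    # just enough of a bencoder for this demo: ints, strings/bytes, lists, dicts
    if isinstance(v, int):
        return b"i%de" % v
    if isinstance(v, str):
        v = v.encode("utf-8")
    if isinstance(v, bytes):
        return b"%d:%s" % (len(v), v)
    if isinstance(v, list):
        return b"l" + b"".join(map(bencode, v)) + b"e"
    if isinstance(v, dict):
        # the spec requires dictionary keys sorted as raw byte strings
        out = b"d"
        for k in sorted(v, key=lambda k: k.encode("utf-8")):
            out += bencode(k) + bencode(v[k])
        return out + b"e"
    raise TypeError(type(v))

def info_hash(torrent):
    # the info-hash is the SHA-1 of the bencoded info dictionary
    return hashlib.sha1(bencode(torrent["info"])).hexdigest()

# a skeletal single-file torrent with made-up values
info = {"name": "example.bin", "length": 0, "piece length": 262144, "pieces": b""}
torrent = {"announce": "http://tracker.example.com/announce", "info": info}

before = info_hash(torrent)
torrent["x-owner-url"] = "https://example.com"           # custom key in the ROOT
assert info_hash(torrent) == before                      # info-hash unchanged

torrent["info"]["x-owner-url"] = "https://example.com"   # custom key in the INFO dict
assert info_hash(torrent) != before                      # new info-hash -> separate swarm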

How do you build a multi-language web site?

A friend of mine is now building a web application with J2EE and Struts, and it's going to be prepared to display pages in several languages.
I was told that the best way to support a multi-language site is to use a properties file where you store all the strings of your pages, something like:
welcome.english = "Welcome!"
welcome.spanish = "¡Bienvenido!"
...
This solution is OK, but what happens if your site displays news or something like that (a blog)? I mean, content that is not static and is updated often... The people that maintain the site would have to write every new entry in each supported language and store each version of the entry in the database. The application would load only the entries in the user's chosen language.
How do you design the database to support this kind of implementation?
Thanks.
Warning: I'm not a Java hacker, so YMMV, but...
The problem with using a list of "properties" is that you need a lot of discipline. Every time you add a string that should be output to the user you will need to open your properties file, look to see if that string (or something roughly equivalent to it) is already in the file, and then go and add the new property if it isn't. On top of this, you'd have to hope the properties file was fairly human readable / editable if you wanted to give it to an external translation team to deal with.
The database based approach is useful for all your database based content. Ideally you want to make it easy to tie pieces of content together with their translations. It only really falls down for all the places you may want to output something that isn't out of a database (error messages etc.).
One fairly old technology which we find still works really well is gettext. Gettext or some variant seems to be available for most languages and platforms. The basic premise is that you wrap your output in a special function call like so:
echo _("Please do not press this button again");
Then running the gettext tools over your source code will extract all the instances wrapped like that into a "po" file. This will contain entries such as:
#: myfolder/my.source:239
msgid "Please do not press this button again"
msgstr ""
And you can add your translation to the appropriate place:
#: myfolder/my.source:239
msgid "Please do not press this button again"
msgstr "s’il vous plaît ne pas appuyer sur le bouton ci-dessous à nouveau"
Subsequent runs of the gettext tools simply update your po files. You don't even need to extract the po file from your source: if you know you may want to translate your site down the line, you can just use the format shown above (the underscored function) for all your output, and if you don't provide a po file it will simply return whatever you put in the quotes. gettext is designed to work with locales, so the user's locale is used to retrieve the appropriate po file. This makes it really easy to add new translations.
Gettext Pros
Doesn't get in your way while coding
Very easy to add translations
PO files can be compiled down for speed
There are libraries available for most languages / platforms
There are good cross platform tools for dealing with translations. It is actually possible to get your translation team set up with a tool such as poEdit to make it very easy for them to manage translation projects
Gettext Cons
Solves your site "furniture" needs, but you would usually still want a database based approach for your database driven content
For more info on gettext, see this Wikipedia page.
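As a concrete example, here is the same pattern in Python, whose standard library ships a gettext module. The 'myapp' domain and the locale directory are assumptions; the .mo file it looks up (e.g. locale/fr/LC_MESSAGES/myapp.mo) would be compiled from a po file like the one above using msgfmt.

import gettext

t = gettext.translation("myapp", localedir="locale",
                        languages=["fr"], fallback=True)
_ = t.gettext

# with fallback=True, a missing translation just returns the original string
print(_("Please do not press this button again"))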
The way I have designed the database before is to have a News table containing basic info like NewsID (int), NewsPubDate (datetime), and NewsAuthor (varchar/int), and then a linked table NewsText with these columns: NewsID (int), NewsText (text), NewsLanguageID (int). Finally, you have a Language table with LanguageID (int) and LanguageName (varchar).
Then, when you want to show your users the news-page you do:
SELECT NewsText FROM News INNER JOIN NewsText ON News.NewsID = NewsText.NewsID
WHERE NewsText.NewsLanguageID = <<Session["UserLanguageID"]>>
That Session bit is a local variable where you store the user's language when they log in or enter the site for the first time.
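Here is a quick sketch of that schema using Python's sqlite3, with the user's language passed as a bound parameter in place of the Session placeholder (table and column names as described above; the sample data is made up).

import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE Language (LanguageID INTEGER PRIMARY KEY, LanguageName TEXT);
CREATE TABLE News     (NewsID INTEGER PRIMARY KEY, NewsPubDate TEXT, NewsAuthor TEXT);
CREATE TABLE NewsText (NewsID INTEGER, NewsText TEXT, NewsLanguageID INTEGER);
""")
db.executemany("INSERT INTO Language VALUES (?, ?)", [(1, "English"), (2, "Spanish")])
db.execute("INSERT INTO News VALUES (1, '2008-09-15', 'admin')")
db.executemany("INSERT INTO NewsText VALUES (?, ?, ?)",
               [(1, "Welcome!", 1), (1, "¡Bienvenido!", 2)])

user_language_id = 2  # would come from the user's session
rows = db.execute(
    "SELECT NewsText.NewsText FROM News "
    "INNER JOIN NewsText ON News.NewsID = NewsText.NewsID "
    "WHERE NewsText.NewsLanguageID = ?", (user_language_id,)).fetchall()
print(rows)  # [('¡Bienvenido!',)]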
Java web applications support internationalization using the Java Standard Tag Library (JSTL).
You've really got 2 problems. Static content and dynamic content.
For static content you can use JSTL. It uses Java ResourceBundles to accomplish this. I managed to get a database-backed bundle working with the help of this site.
The second problem is dynamic content.
To solve this problem you'll need to store the data so that you can retrieve different translations based on the user's Locale (a Locale includes country and language).
It's not trivial, but it is something you can do with a little planning up front.
@Auron: that's what we apply it to. Our apps are all PHP, but gettext has a long heritage.
Looks like there is a good Java implementation, too.
Tag libraries are fine if you're using JSP, but you can also achieve I18N using a template-based technology such as FreeMarker.
