How healthy is the LucidDB project?

I am working on a project that would greatly benefit from a column store database on the backend. I was attracted to LucidDB since the feature set seems perfect, and I cannot commit to the cost of a commercial solution like Infobright or Vertica until the project has shown value.
The problem is, I am concerned about the health of the LucidDB project. The internal wiki hasn't been updated in more than a month, and the website is full of broken links. DynamoBI dying does not help the case.
Does anyone know the current state of the project, and how comfortable would you be with production code relying on this database?

LucidDB is no longer supported by DynamoBI, as they are closing up shop.
http://www.nicholasgoodman.com/bt/blog/2012/10/08/dynamobi-is-dead/
Dr.Bharatheesh Jaysimha

Related

How to create Salesforce incremental package.xml automatically?

Has anyone experimented with creating a Salesforce package.xml automatically for continuous integration? If there is any script or idea, please share.
As you know, an incremental package.xml helps deploy only the modified files, rather than a complete package.xml that redeploys unmodified files as well, which takes a lot of time.
Thanks in advance!
Tricky, and not really a programming-related problem; consider cross-posting this to https://salesforce.stackexchange.com/ or maybe even https://devops.stackexchange.com/
I don't think there's a clear answer; you'll have to experiment. Especially since you tagged "migration tool" (the old-school, battle-tested, but lower-priority Metadata API; it seems all focus is now on the SFDX style of deployments). Do you use any version control (ideally Git), or do you hope to somehow compare the source and target orgs, figure out the deltas, and deploy only them?
Remember that SF often gets better at detecting "no changes" with every release (how old is your migration tool's JAR file?). For example, when I deploy my current project to an empty sandbox (an exact copy of prod, with no custom objects, code, etc. yet), the initial deploy takes ~7 minutes, but any subsequent deploy with the same content or slight changes takes just 3-4. So try to calculate the time lost in the grand scheme of things and decide what gains you want to see and how much time you want to spend on experimenting and tweaking the solution.
You could look into dedicated deployment solutions such as Gearset, Autorabit, or Odaseva (I'm not affiliated with any of them, and this list is not exhaustive). They are often capable of running a comparison for you.
There are several projects that try to compose a package.xml based on the Git diff between two commits. Of course, you need to have a repo first and some discipline; a rough sketch of the idea follows after this list:
https://github.com/cloudsandbox/sfdx-gen-pack (I saw a presentation about it at Cloudforce London 2019)
https://github.com/Accenture/sfpowerkit seems to have a "diff" command (disclaimer: I used to work for Accenture but am not affiliated now; I haven't worked on the tool and haven't used it personally)
https://cumulusci.readthedocs.io/en/latest/ seems to be interesting and mature. It was built by SF employees; it's not an official tool, but it is used to CI-deploy the non-profit packages they build (maybe you've heard of the Non Profit Starter Pack, especially if you ever considered enabling Person Accounts). I'm not sure if they do delta deployments as such, but there seems to be a command that updates package.xml with the files in the repository, so it's a start? https://cumulusci.readthedocs.io/en/latest/tutorial.html#part-4-running-tasks
I'm not saying CumulusCI will be a silver bullet, but of these three it seems to be the most actively maintained ;) It sounds like you'd have to get familiar with SFDX, though (if not the whole thing, then at least the commands to convert a project back and forth between the "source" (SFDX) structure and the Metadata API structure).
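If you want to see the bare idea without any of those tools, here is a minimal sketch that turns a git diff into a package.xml. Everything in it is illustrative: the branch names, the deliberately tiny directory-to-type map, and the API version all need adjusting to your project.

```csharp
// Sketch: compose an incremental package.xml from `git diff --name-only`.
// Assumes the old Metadata API layout (src/classes/Foo.cls etc.).
using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.IO;

class PackageXmlFromDiff
{
    // Illustrative and deliberately incomplete mapping.
    static readonly Dictionary<string, string> TypeByDir = new Dictionary<string, string>
    {
        { "classes",  "ApexClass" },
        { "triggers", "ApexTrigger" },
        { "pages",    "ApexPage" },
        { "objects",  "CustomObject" },
    };

    static void Main()
    {
        var psi = new ProcessStartInfo("git",
            "diff --name-only --diff-filter=ACM master feature/vat")
        {
            RedirectStandardOutput = true,
            UseShellExecute = false
        };
        var members = new SortedDictionary<string, SortedSet<string>>();
        using (var git = Process.Start(psi))
        {
            string path;
            while ((path = git.StandardOutput.ReadLine()) != null)
            {
                var parts = path.Split('/'); // e.g. src/classes/AccountService.cls
                if (parts.Length >= 3 && TypeByDir.TryGetValue(parts[1], out var type))
                {
                    if (!members.ContainsKey(type)) members[type] = new SortedSet<string>();
                    members[type].Add(Path.GetFileNameWithoutExtension(parts[2]));
                }
            }
        }
        Console.WriteLine("<?xml version=\"1.0\" encoding=\"UTF-8\"?>");
        Console.WriteLine("<Package xmlns=\"http://soap.sforce.com/2006/04/metadata\">");
        foreach (var kv in members)
        {
            Console.WriteLine("  <types>");
            foreach (var m in kv.Value) Console.WriteLine("    <members>" + m + "</members>");
            Console.WriteLine("    <name>" + kv.Key + "</name>");
            Console.WriteLine("  </types>");
        }
        Console.WriteLine("  <version>45.0</version>");
        Console.WriteLine("</Package>");
    }
}
```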
Answering my own question: I found that "git diff master feature/vat | force-dev-tool changeset create vat" works!
Thanks to Roman, who answered in https://salesforce.stackexchange.com/questions/184332/is-there-a-pre-build-solution-for-generating-a-package-xml-from-a-git-repo

Optimal Database to be used for metro apps?

I am new to Windows 8 Metro app development, and I need your help clearing up my doubts about database support for Metro applications.
I am developing an application that needs to store a reasonably huge amount of data in its database; it will require many tables and relations between them. With this, I can fetch data from the database, show it to the user, perform the required operations on it, and sync the data when an internet connection is available.
Please let me know which database is optimal for me to use in my Metro application.
To my knowledge, these are the present alternatives for database support in Metro applications (I might be wrong; please correct me if I am):
1) IndexedDB 2) SQLite 3) WinRT file-based databases 4) Siaqodb 5) Devart LinqConnect
Thanks in advance.
Your answer might clear up the doubts of many developers like me about database support, so please feel free to answer.
IMO you forgot about one more solution, which will probably be the best for you: use an external DB and a web service / Web API to communicate with it.
In my project I'm using Azure SQL. My Metro app communicates with it through a Web API published in the cloud. A sketch of the pattern is below.
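A minimal sketch of that pattern, assuming a made-up endpoint (a real app would deserialize the JSON into model classes rather than pass the raw string around):

```csharp
// Sketch: the Metro app never talks to the database directly; it calls
// an HTTP endpoint (Web API) that you host next to the Azure SQL database.
using System;
using System.Net.Http;
using System.Threading.Tasks;

public static class CustomerApi
{
    static readonly HttpClient Http = new HttpClient();

    public static async Task<string> GetCustomersJsonAsync()
    {
        // The URL is a placeholder; point it at your published Web API.
        return await Http.GetStringAsync(new Uri("https://myapi.example.com/api/customers"));
    }
}
```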
http://www.infoq.com/news/2012/07/sqlite-metro-winmobile
http://wp.qmatteoq.com/using-sqlite-in-your-windows-8-metro-style-applications
http://timheuer.com/blog/archive/2012/05/20/using-sqlite-in-metro-style-app.aspx
SQLite on WinRT is probably your best bet...
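For a rough idea of what that looks like, here is a minimal sketch using the sqlite-net wrapper (a common way to talk to SQLite from a C#/XAML Metro app); the model class and paths are made up:

```csharp
// Sketch: local SQLite storage via the sqlite-net wrapper (NuGet "sqlite-net").
using System.Collections.Generic;
using System.IO;
using System.Linq;
using SQLite;

public class Customer
{
    [PrimaryKey, AutoIncrement]
    public int Id { get; set; }
    public string Name { get; set; }
}

public static class LocalDb
{
    // In a real Metro app the file would live under
    // Windows.Storage.ApplicationData.Current.LocalFolder.Path.
    static readonly string DbPath = Path.Combine("data", "app.db");

    public static List<Customer> SaveAndLoad()
    {
        using (var db = new SQLiteConnection(DbPath))
        {
            db.CreateTable<Customer>();                 // no-op if it already exists
            db.Insert(new Customer { Name = "Ada" });
            return db.Table<Customer>().ToList();       // LINQ-style queries
        }
    }
}
```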
Chad

migrator.net vs fluentmigrator vs migsharp

I am currently investigating possible options for a migration framework/tool. I like the idea of Ruby migrations, on which the above frameworks are based.
So I am asking for your experience, opinions and maybe a comparison between them. Are you using them in production?
Thanks for the responses. The goal of this question was to get a feeling for which tool is used most in the developer community, but it seems that migrations are not a hot topic here.
Anyway, I have decided to go with MigSharp, as the codebase seems pretty clean, it is quite easy to handle, and it has built-in support for MS SQL CE. The runner-up would have been FluentMigrator, for which I was not able to produce a working example for Compact Edition.
Cheers
I use FluentMigrator in production, and I am a longtime contributor to FM. I think your question is too general; be more specific. Also, FM has a Google group that is fairly active if you want FM information.
FM is derived from migrator.net, as I recall. It uses a fluent syntax and supports multiple databases. We have taken some inspiration from Rails migrations, but it's definitely not a port. Worth checking out; a small example migration is sketched below.
One thing I've learned is not to put your migrations in the same assembly as your app code. Separate them into a migration assembly, and use that for migrating your databases.
Also, you should always work with multiple environments to avoid the problems of running migrations straight against production. I always have at least a development and a production environment, and most of the time there is a testing environment as well.
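A typical FM migration, as a sketch (the table, column, and version number are made up; put classes like this in the separate migration assembly mentioned above):

```csharp
// Sketch: a FluentMigrator migration with explicit Up/Down steps.
using FluentMigrator;

[Migration(34)]
public class AddCustomerEmail : Migration
{
    public override void Up()
    {
        Alter.Table("Customers")
             .AddColumn("Email").AsString(255).Nullable();
    }

    public override void Down()
    {
        Delete.Column("Email").FromTable("Customers");
    }
}
```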
I use Mig#.
It works well, but you will need to have some guidelines for usage, as migrations can get complicated.
We use a sequence number at the end of our migration names rather than a date-time stamp, because we don't know when a timestamp would have been set (when the developer began the change-set, just before committing, or some time in between); different developers could use different approaches.
Names such as Migration_0000034.cs give you plenty of space.
At this point, I would stick with migrator.net. I like the promise of FluentMigrator, but it seems to have no more active development than migrator.net (see the issues and pull requests that have languished on their GitHub site).
There is also no easy way to do an ExecuteScalar(). I'd add it, but I don't want to create my own fork, and I see no reason to believe a pull request would actually land in master. (Execute.WithConnection takes an Action, so it fires on demand rather than when I need it to fire; see the sketch below.)
So for me, I'm heading back to migrator.net.
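For context, here is roughly what reading a value through Execute.WithConnection looks like (the migration number, table, and column are made up). The delegate only runs when the migration executes, which is exactly the deferral complained about above:

```csharp
// Sketch: a poor man's ExecuteScalar inside a FluentMigrator migration.
using FluentMigrator;

[Migration(35)]
public class BackfillSomething : Migration
{
    public override void Up()
    {
        Execute.WithConnection((conn, tx) =>
        {
            using (var cmd = conn.CreateCommand())
            {
                cmd.Transaction = tx;
                cmd.CommandText = "SELECT MAX(TenantId) FROM Tenants";
                var maxId = cmd.ExecuteScalar(); // runs at migration time, not when Up() is built
                // ... use maxId to drive further commands here ...
            }
        });
    }

    public override void Down() { }
}
```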

heavy iTunes Connect scraping

I'm looking at different options to get the sales reports and other data out of the iTunes Connect website. Since Apple doesn't provide an API, all the solutions I found are based on scraping the page.
As I need the information for a product that we offer, I'm not happy to hand all the iTunes accounts over to a 3rd-party service. This is why I want to scrape it myself or use a product that runs on our own servers.
My questions are:
Does anyone have experience with how frequently Apple changes the web front-end?
Does anyone have experience with the maximum number of requests one server can make to the site? I'm afraid of being banned by Apple.
Is there anything else I have to keep in mind that could cause serious trouble?
Just if someone is interested in the tools I looked at, here is a list:
Services:
http://www.appfigures.com (has API)
http://www.itunesapis.com
http://www.appannie.com/
http://www.heartbeatapp.com
Products:
http://www.appclix.com (has an enterprise licence that runs on your own server and includes an API; tends to be more of a general mobile analytics tool)
http://www.ideaswarm.com/products/appviz/ (Mac enduser app)
Open Source Tools:
http://code.google.com/p/appdailysales/
http://metacpan.org/pod/WWW::iTunesConnect
http://www.rogueamoeba.com/utm/2009/05/04/itunesconnectarchiver/
http://github.com/kasatani/iphone-stats
http://bfoz.net/projects/itc/
http://sourceforge.net/projects/itunesanalytics/
UPDATE:
I started using Kirby's python script (https://github.com/kirbyt/appdailysales) and it works very well.
Does anyone have experience with how frequently Apple changes the web front-end?
I can't speak for all of iTunes Connect, only downloading daily sales reports. My script was rock solid and didn't require a single change between November 2009 and September 2010. That changed in September 2010, when Apple rolled out the new web site; the change broke the old script, and a new one had to be written. Since the new site rolled out, I've been making changes every few days to handle tweaks from Apple. I'm hoping the tweaks will end soon.
Take a look at the download page for appdailysales.py. The dates will give you a general idea of how often I make changes to the script.
https://github.com/kirbyt/appdailysales
Again, this is only for daily sales reports. I'm not sure how frequently other areas of iTC change.
Does anyone have experience with the maximum number of requests one server can make to the site? I'm afraid of being banned by Apple.
I've not experienced this, but my server runs the script only once a day. I frequently hit iTC while working on the script, but not enough to cause a noticeable load on Apple's servers.
Is there anything else I have to keep in mind that could cause serious trouble?
I don't know what might get you in trouble with Apple, but one thing that does cause a serious headache is changes to the web site. While the new version of the site makes screen scraping easier, it did involve writing a new script. Apple does not give you a heads-up that they are changing something; you find out after the fact, when something in your screen scraper breaks.
If you depend on the data daily, then you have to drop everything and make the necessary fixes. And there is nothing stopping Apple from rolling out another new site sometime in the future.
Hope that helps.
-KIRBY
I'm using AppSalesMobile on iPhone. It gets updated pretty quickly. Another script I use is salestrends.sh, which just downloads the reports into a folder for easy import into databases etc.
If you're also interested in finding out, in which countries an app is featured, you can use my iTunesFeaturedCheck script.
Also check out this question with more links.
You might also try the Autoingestion tool from Apple. Documentation here.
appdailysales is the best tool out there that I have found.
I have modified it so that the script automatically puts the ITC data into a MySQL database instead of just saving the txt files. And as Kirby pointed out, I too run it only once a day, and everything appears to be working. Nothing has been blocked by Apple so far.
As for the script breaking, the one good thing is that Apple keeps daily sales reports for 14 days (last I checked). This means that if the script breaks, one has several days to fix the script and still get the daily sales reports.
Good luck.
Kevin

Incremental development with subsonic

I'm in the process of starting up a web site project. My plan is to roll out the site in a somewhat rudimentary form first and then add to the site functionality along the way.
I'm using SubSonic 3 for my DAL, and I'm expecting the database to go through multiple versions as the site evolves. This means I'll need some kind of versioning and migration tooling. I understand that SubSonic has built-in migration capabilities, but I'm having difficulty grasping how to use these tools in my scenario.
First there's the SimpleRepository model, where SubSonic "automagically" handles the migrations as I develop my site (roughly the pattern sketched below). I can see how this works on my dev machine, but I'm not sure how to handle deployments with this.
Would Subsonic run the necessary migrations on my live site as the appropriate methods are called?
Is there some way I can force all necessary migrations on a site while taking the site offline, when using the SimpleRepository model? (Otherwise I would expect random users to experience severe performance hits as the migration routines kick in.)
Would I be better off using the ActiveRecord model, and then handling migrations with the Subsonic.Schema.Migrator? (I suspect so)
Do you know of any good resources explaining how to handle this situation with the migrator? (I read the doc, but I can't piece together how I would use this in practice)
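For reference, the SimpleRepository mode I'm talking about looks roughly like this (a sketch; the connection string name and model class are made up):

```csharp
// Sketch: SubSonic 3 SimpleRepository with runtime migrations enabled.
// With RunMigrations, SubSonic alters the schema on the fly to match the
// model -- which is exactly the behaviour I'm unsure about on a live site.
using SubSonic.Repository;

public class Article
{
    public int Id { get; set; }
    public string Title { get; set; }
}

public class ArticleStore
{
    // "SiteDb" is a placeholder connection string name from web.config.
    readonly SimpleRepository repo =
        new SimpleRepository("SiteDb", SimpleRepositoryOptions.RunMigrations);

    public void Save(Article article)
    {
        repo.Add(article); // the first call may create/alter the Articles table
    }
}
```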
Thanks for listening/replying.
Regards
Jesper Hauge
I would advise against ever running migrations against a live site. SubSonic's migrations are really there to make development simpler and should never be used against a live environment. To be honest, even using SubSonic.Schema.Migrator you're still going to bump into the fact that refactoring databases is an incredibly hard problem. For example, renaming a column in a table using Management Studio is trivial, but what happens in the background involves creating an entirely new table, migrating all the constraints, data, etc., and then renaming the new table.
The most effective way I've found for dealing with this is:
Script all database changes as you make them in your development environment (SQL Server Management Studio will do this for you), and add these scripts to your source control.
As part of deployment (obviously back up first), run the migration scripts and, on success, deploy the updated application. A minimal runner is sketched below.
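A minimal sketch of that runner, assuming numbered .sql files in a folder and a bookkeeping table (all names are made up; scripts containing GO batch separators would additionally need splitting):

```csharp
// Sketch: apply numbered .sql scripts in order, recording each one so
// that re-running the deployment is safe.
using System;
using System.Data.SqlClient;
using System.IO;
using System.Linq;

class MigrationRunner
{
    static void Main()
    {
        using (var conn = new SqlConnection(
            "Server=.;Database=MySite;Integrated Security=true"))
        {
            conn.Open();
            Exec(conn, @"IF OBJECT_ID('SchemaChanges') IS NULL
                CREATE TABLE SchemaChanges (
                    Script NVARCHAR(260) PRIMARY KEY,
                    AppliedAt DATETIME NOT NULL DEFAULT GETDATE())");

            foreach (var file in Directory.GetFiles("migrations", "*.sql").OrderBy(f => f))
            {
                var name = Path.GetFileName(file);
                using (var check = new SqlCommand(
                    "SELECT COUNT(*) FROM SchemaChanges WHERE Script = @s", conn))
                {
                    check.Parameters.AddWithValue("@s", name);
                    if ((int)check.ExecuteScalar() > 0) continue; // already applied
                }
                Exec(conn, File.ReadAllText(file));
                using (var record = new SqlCommand(
                    "INSERT INTO SchemaChanges (Script) VALUES (@s)", conn))
                {
                    record.Parameters.AddWithValue("@s", name);
                    record.ExecuteNonQuery();
                }
                Console.WriteLine("applied " + name);
            }
        }
    }

    static void Exec(SqlConnection conn, string sql)
    {
        using (var cmd = new SqlCommand(sql, conn)) cmd.ExecuteNonQuery();
    }
}
```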
Whether you use ActiveRecord or SimpleRepository is then down to whether you want the extra features/complexity of ActiveRecord.
Hope this helps
I would use ActiveRecord; it's easy to use, and for any changes you just re-run the TT files. You would then just build or publish your solution and be done. SVN will keep multiple versions of your build stages, so if you make a mess of it you just drop back a revision.
