I have a Google App Engine Java app running and would like to add PayPal payments.
I downloaded and built the App Engine toolkit:
- Not much help there: a missing "gallery.jsp" file and non-working samples.
Has anybody actually been able to get a simple sample running?
Scenario:
Have a user pay for a service, digital goods, or anything else, and then find out what the user bought. In my scenario they will buy days of access to a service, so they pay per day or subscribe. But it does not seem to be feasible for a single developer.
Furthermore, Google Wallet is no better. Is it just me, or is it really that hard? Or is it just for bigger companies that can throw any number of man-hours at this?
Any information is welcome, but all the information I have found so far is really obscure, for whatever reason.
Thanks in advance
Regards
I have some devices I want to give to my clients, e.g. to take home.
Basically, I want them to be able to ask the device (an Echo Dot, for example):
Ask MYAPP, what song is Number One
Ask MYAPP, what song is Number Two
... etc.
and then have it read back the name of a song.
My questions (I have never worked with Alexa or any Amazon service before):
How long will it take to get it certified?
Do I need to get it certified?
Is there an issue with playing a song?
I don't own a device; can I test it well enough without owning one?
Is the Alexa Skills Kit easy enough that I can get this done rather quickly, or is it difficult to get started?
What's a good place to get started? I quickly looked at creating a skill and the procedure seems heavyweight. Is there maybe a forum or chat where the gurus hang out?
How long will it take to get it certified? - Once you submit the skill it takes at most 7 business days to get certified (most of my skills were certified within 2 days). Please read here for the certification checklist.
Do I need to get it certified? - Yes, it must be certified for your skill to appear in the Amazon Alexa Skill Store. If it is not in the store, other people cannot enable it on their devices; it will only be available on your own account. You don't need certification to test it, as you can try it from your own Amazon account.
Is there an issue with playing a song? - You can play any audio file, but the current limit per audio file is 90 seconds. Please read more here.
I don't own a device; can I test it well enough without owning one? - You don't need a device to test it. You can use Echosim (https://echosim.io/) to test your skill. Alternatively, you can use a Raspberry Pi, since it can be set up as an Alexa-enabled device.
Is the Alexa Skills Kit easy enough that I can get this done rather quickly, or is it difficult to get started? - It is very easy to do. Trust me, I learned it and created a skill in a week or so.
What's a good place to get started? - First you need an Amazon developer account (I believe you already have one). Below are links to simple end-to-end samples:
https://developer.amazon.com/alexa-skills-kit/alexa-skill-quick-start-tutorial
https://developer.amazon.com/blogs/post/Tx3DVGG0K0TPUGQ/New-Alexa-Skills-Kit-Template:-Step-by-Step-Guide-to-Build-a-Fact-Skill
There are a couple of courses available on Udemy as well.
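To give a feel for how little code a basic skill needs, here is a minimal sketch of a handler for the kind of request in your example, using the ask-sdk-core library in TypeScript. The intent name SongNumberIntent, the position slot, and the song list are made up for illustration:

```typescript
import * as Alexa from 'ask-sdk-core';

// Hypothetical lookup table; in a real skill this could come from a database.
const SONGS: Record<string, string> = {
  one: 'Song Title A',
  two: 'Song Title B',
};

const SongNumberIntentHandler: Alexa.RequestHandler = {
  canHandle(handlerInput) {
    return Alexa.getRequestType(handlerInput.requestEnvelope) === 'IntentRequest'
      && Alexa.getIntentName(handlerInput.requestEnvelope) === 'SongNumberIntent';
  },
  handle(handlerInput) {
    // "position" is a custom slot holding "one", "two", ...
    const position = Alexa.getSlotValue(handlerInput.requestEnvelope, 'position');
    const song = SONGS[(position ?? '').toLowerCase()] ?? 'a song I do not know yet';
    return handlerInput.responseBuilder
      .speak(`Number ${position ?? 'unknown'} is ${song}.`)
      .getResponse();
  },
};

// Lambda entry point: wires the handler into the skill.
export const handler = Alexa.SkillBuilders.custom()
  .addRequestHandlers(SongNumberIntentHandler)
  .lambda();
```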
Since this question is referenced as related by some current alexa-skill questions, I would like to give some updates on the points where Amazon has improved the Alexa environment in the five years since the initial answers:
Do I need to get it certified? Besides publishing a skill to all Alexa users, there are some other possibilities. You could add further users to your developer account (but be aware that they will then see all your skills and, depending on their roles, may also make changes). Another option for skill-level access is beta testing, but this is very limited. The last option is Alexa for Business, where a skill can be distributed to the devices of an organization; this is quite complex, but it offers additional context and the option to restrict access to the skill to just that organization.
Is there an issue with playing a song? Besides embedding the audio in SSML, you have the AudioPlayer directive, but be aware that while its audio length is unlimited, you leave the skill session. With Alexa Presentation Language for Audio (APL-A) you keep the dialog session and have more audio capabilities than with SSML, but you still face length limits. Staying inside the skill without audio length limits is possible by using the APL (video) player component with a size of 0, but this limits your skill to devices with screens.
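To illustrate the first option, here is a minimal sketch of an intent handler that embeds a clip through the SSML audio tag, using ask-sdk-core in TypeScript (the intent name and the clip URL are placeholders); the same response builder also exposes addAudioPlayerPlayDirective if you go the AudioPlayer route:

```typescript
import * as Alexa from 'ask-sdk-core';

// Plays a short hosted MP3 inside the dialog session via SSML <audio>.
// The clip must be served over HTTPS and is subject to the SSML length limit.
const PlayPreviewIntentHandler: Alexa.RequestHandler = {
  canHandle(handlerInput) {
    return Alexa.getRequestType(handlerInput.requestEnvelope) === 'IntentRequest'
      && Alexa.getIntentName(handlerInput.requestEnvelope) === 'PlayPreviewIntent';
  },
  handle(handlerInput) {
    return handlerInput.responseBuilder
      .speak('Here is a short preview. <audio src="https://example.com/previews/clip.mp3"/> Want to hear another one?')
      .reprompt('Would you like to hear another preview?')
      .getResponse();
  },
};
```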
I don't own a device; can I test it well enough without owning one? The previous answer is no longer valid, since echosim.io has been offline since April 5, 2021. But nowadays the developer console has a very good simulator. Additionally, you can use a local simulator with Visual Studio Code and the ASK Toolkit for testing.
Is the Alexa Skills Kit easy enough that I can get this done rather quickly, or is it difficult to get started? In the last few years, Amazon has extended the options for building and hosting a skill. With Alexa-hosted skills you do not need to worry about AWS, or about connecting the Alexa cloud to AWS or to your own hosting solution, and you can still use all skill features. If your logic is simple, you could use Alexa Blueprints, which cover the logic so you only provide the content (assuming you find a blueprint matching the logic you need). By the way, this is also an additional option for the certification question, since a blueprint normally lives only in your account, and you can share your blueprint instance with others, too.
I have published an app built with React Native. Currently it's iOS only, but eventually may be released for Android as well. I'd like a cross-platform solution to remotely assist customers that run into bugs, crashes or any unexpected behavior. While the app could continuously log everything to a server, I've found that that's not very helpful since customers usually have very specific points in time that they need help with. Sifting through continuous logs is time consuming and generally a waste of resources.
My hope is to give the user the ability to press a button to send the stack trace, the last N minutes' worth of logs, etc. directly to me. This wouldn't work in the case of a hard crash, of course, but the vast majority of the time the app is functional when there's something they need help with.
A pie-in-the-sky idea would be to let the user share their screen with me.
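To make the button idea more concrete, here is roughly what I'm imagining, sketched in TypeScript (the endpoint URL and the five-minute window are placeholders, not anything I've built yet):

```typescript
// In-memory ring buffer the app logs into; flushed to a support endpoint
// when the user taps a "report a problem" button.
type LogEntry = { time: number; level: string; message: string };

const MAX_AGE_MS = 5 * 60 * 1000; // keep roughly the last five minutes
let buffer: LogEntry[] = [];

export function log(level: string, message: string): void {
  const now = Date.now();
  buffer.push({ time: now, level, message });
  // Drop anything older than the window so the buffer stays small.
  buffer = buffer.filter((entry) => now - entry.time <= MAX_AGE_MS);
}

// Called from the button's onPress handler.
export async function sendReport(userComment: string): Promise<void> {
  await fetch('https://example.com/support/report', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ comment: userComment, logs: buffer }),
  });
}
```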
I found this related question, but it doesn't fully cover what I'm trying to accomplish:
Release mode diagnostics in React Native
BugSnag looks promising. It's a paid service.
https://www.bugsnag.com/platforms/react-native-error-reporting
I tried BugSnag and a few other services. In the end, Sentry has the simplest and most reliable React Native library. It's also free on the Developer plan (5k errors per month is plenty for us, and it supports multiple apps).
https://sentry.io/pricing/
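For reference, the basic wiring is only a few lines. A minimal sketch with @sentry/react-native (the DSN is a placeholder from your Sentry project settings, and the helper function is just an illustration):

```typescript
import * as Sentry from '@sentry/react-native';

// One-time setup, as early as possible in the app entry point.
Sentry.init({
  dsn: 'https://examplePublicKey@o0.ingest.sentry.io/0', // placeholder DSN
});

// Uncaught JS errors and native crashes are reported automatically; breadcrumbs
// and manually captured exceptions add context for the "help me with this flow" cases.
export function reportProblem(error: unknown, whatTheUserWasDoing: string): void {
  Sentry.addBreadcrumb({ category: 'support', message: whatTheUserWasDoing });
  Sentry.captureException(error);
}
```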
I have been using Google Analytics for some time now, but it doesn't show anything for today: it tells me how many views I had yesterday, this month, et cetera, but it won't show me today's statistics. I have been experimenting with my HTML code, but I don't know if I made a mistake doing that. I am using the 'official' tracking code my Google Analytics account gives me.
I hope somebody will be able to help me.
My website is www.lemontierres.com
Friendly regards
Statistics and reports for the current day are delayed in Analytics. It is not real time.
I'm looking at different options to get the sales reports and other data out of the iTunes Connect website. Since Apple doesn't provide an API, all the solutions I found are based on scraping the page.
Since I need the information for a product that we offer, I'm not happy about handing all the iTunes accounts over to a third-party service. This is why I want to scrape it myself or use a product that runs on our own servers.
My questions are:
Does anyone have experience with how frequently Apple changes the web front-end?
Does anyone have experience with how many requests one server can make to the site? I'm afraid of being banned by Apple.
Is there anything else I should keep in mind that could cause serious trouble?
In case anyone is interested in the tools I looked at, here is a list:
Services:
http://www.appfigures.com (has API)
http://www.itunesapis.com
http://www.appannie.com/
http://www.heartbeatapp.com
Products:
http://www.appclix.com (has an enterprise license that runs on your own server and includes an API; tends to be more of a general mobile analytics tool)
http://www.ideaswarm.com/products/appviz/ (Mac end-user app)
Open Source Tools:
http://code.google.com/p/appdailysales/
http://metacpan.org/pod/WWW::iTunesConnect
http://www.rogueamoeba.com/utm/2009/05/04/itunesconnectarchiver/
http://github.com/kasatani/iphone-stats
http://bfoz.net/projects/itc/
http://sourceforge.net/projects/itunesanalytics/
UPDATE:
I started using Kirby's Python script (https://github.com/kirbyt/appdailysales) and it works very well.
Does anyone have experience with how frequently Apple changes the web front-end?
I can't speak for all of iTunes Connect, only downloading daily sales reports. My script was rock solid and didn't require a single change between November 2009 and September 2010. This changed in September 2010 when Apple rolled out the new web site. This broke the old script, and a new one had to be written. Since rolling out the new web site, I make changes every few days to handle the tweaks from Apple. I'm hoping the tweaks will end soon.
Take a look at the download page for appdailysales.py. The dates will give you a general idea of how often I make changes to the script.
https://github.com/kirbyt/appdailysales
Again, this is only for daily sales reports. I'm not sure how frequently other areas of iTunes Connect change.
Does anyone have experience with how many requests one server can make to the site? I'm afraid of being banned by Apple.
I've not experienced this, but my server runs the script only once a day. I frequently hit the iTC when working on the script, but not enough to cause a load on Apple's servers.
Is there anything else I should keep in mind that could cause serious trouble?
I don't know what might get you in trouble with Apple, but one thing that does cause a serious headache is changes to the web site. While the new version of the web site makes screen scraping the site easier, it did involve writing a new script. Apple does not give you a heads up that they are changing something. You find out after the fact when something in your screen scraper breaks.
If you depend on the data daily, then you have to drop everything and make the necessary fixes. And there is nothing stopping Apple from rolling out another new site sometime in the future.
Hope that helps.
-KIRBY
I'm using AppSalesMobile on iPhone. It gets updated pretty quickly. Another script I use is salestrends.sh, which just downloads the reports into a folder for easy import into databases, etc.
If you're also interested in finding out in which countries an app is featured, you can use my iTunesFeaturedCheck script.
Also check out this question with more links.
You might also try the Autoingestion tool from Apple. Documentation here.
appdailysales is the best tool out there that I have found.
I have modified it so that the script automatically puts the iTunes Connect data into a MySQL database instead of just saving the .txt files. And as Kirby pointed out, I too only have it run once a day and everything appears to be working. Nothing has been blocked by Apple so far.
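The idea is roughly the following; this is just an illustrative sketch in TypeScript rather than my actual change to the Python script, and the column names ('SKU', 'Units', 'Begin Date'), table, and connection settings are placeholders that may need adjusting for the current report format:

```typescript
import { createReadStream } from 'fs';
import { createGunzip } from 'zlib';
import * as readline from 'readline';
import mysql from 'mysql2/promise';

// Reads one gzipped, tab-separated daily report and inserts each row into MySQL.
async function loadDailyReport(path: string): Promise<void> {
  const conn = await mysql.createConnection({
    host: 'localhost', user: 'itc', password: 'secret', database: 'sales',
  });
  const lines = readline.createInterface({
    input: createReadStream(path).pipe(createGunzip()),
  });
  let header: string[] | undefined;
  for await (const line of lines) {
    const cols = line.split('\t');
    if (!header) { header = cols; continue; }        // first line is the header
    if (cols.length < header.length) { continue; }   // skip totals / blank lines
    const row = Object.fromEntries(header.map((h, i) => [h, cols[i]]));
    await conn.execute(
      'INSERT INTO daily_sales (sku, units, report_date) VALUES (?, ?, ?)',
      [row['SKU'], Number(row['Units']), row['Begin Date']],
    );
  }
  await conn.end();
}
```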
As for the script breaking, the one good thing is that Apple keeps daily sales reports for 14 days (last I checked). This means that if the script breaks, one has several days to fix the script and still get the daily sales reports.
Good luck.
Kevin
I'm looking to learn about running my own Google Wave server. There are videos on how to set it up and get it running on the command line, but my question is: okay, where do you go from there? How do you take this service that is running on the command line and expose it on the web? Is there documentation on doing just that?
I have looked at the embedded API, but I do not think that's what I want. I'd also love for the frontend to be built in PHP; would anyone have any idea how to make PHP communicate with Wave?
Thanks,
Matt Mueller
Okay, y'all. I emailed a few of the key Google Wave developers, and surprisingly one of them responded! Here's what he said:
"Thanks for contacting me.
Unfortunately there's still a big gap
between the code we have opened so far
and building a UI. The conversation
model describes how to interpret a
wave as a conversation but we have yet
to open up the code that does that (we
will though!). So it would be a big
challenge at the moment."
So we can only wait I suppose!