The documentation says:
For public read-only and anonymous resources, such as getting image info, looking up user comments, etc. all you need to do is send an authorization header with your client_id in your requests. This also works if you'd like to upload images anonymously (without the image being tied to an account), or if you'd like to create an anonymous album. This lets us know which application is accessing the API.
But it's not clear what the "etc." part includes; it isn't defined anywhere in the documentation. I have been using Postman to debug before writing this in AngularJS -- this is just an AngularJS learning project for me -- and I can't get things like the gallery images for a particular subreddit. I get a 403 status back.
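For reference, this is roughly the request I'm making (a minimal sketch with AngularJS's $http, assuming the Client-ID header format from the quoted docs; the /3/gallery/r/{subreddit} path is my reading of the endpoint list):

    // Sketch: anonymous Imgur API request, from a controller/service
    // where $http has been injected. YOUR_CLIENT_ID is a placeholder.
    $http.get('https://api.imgur.com/3/gallery/r/aww', {
        headers: { 'Authorization': 'Client-ID YOUR_CLIENT_ID' }
    }).then(function (response) {
        console.log(response.data.data); // the gallery items, if it works
    });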
Anyone ever done this before?
As it turns out, if you register for an API key, it doesn't take effect right away. The key you get may not work for a few hours. (I'm guessing that's the worst case.)
It works now, with the exact same code and no changes.
Related
I am using an Angular SPA with DTM. Using custom event-based rules, I am able to get all my data, including pageName, v41, and v42, correctly. Now, inside the Adobe editor, I am storing the page name in s.pageName and a hard-coded value in s.server. I have verified with the Omnibug tool that all my data (server, pageName, v41, and v42) is populating correctly.
The problem shows up in Omniture reporting: the server and page data are not coming through. The page-name data shows only the SPA homepage for all page visits, and server comes through as the default from s_code, not the one I am passing in s.server. The eVars/props are all coming through fine. Even if I set prop40=s.pageName and prop41=s.server, I see the correct data populating in prop40/prop41 in Omniture reporting, but not under Page and Server. And I can't use prop40/prop41 for the page name/server, since that's not the correct way to do it and PAGE VISITS are ZERO in that case.
Any advice on how to get data into Page/Server in Omniture for an SPA, or on what's wrong in my implementation? Thanks in advance!!
If you really do see the correct values in Omnibug (or, more specifically, in the network request to the Adobe collection server), then the issue is not in the code.
Check against another AA hit debugger; possibly Omnibug is somehow bugging out. There are a ton of alternatives out there: Adobe Experience Cloud Debugger, ObservePoint, Charles Proxy, Fiddler, or just the browser dev tools network tab (what I usually use as a backup).
Make sure you are looking at the correct report suite. Perhaps your data is being sent to a dev report suite and you are looking at the prod report suite, or vice versa?
Check to see if you have any Processing Rules that are overriding your values.
Contact your Adobe rep to check whether there are any VISTA Rules present for the report suite that are overriding your values.
If you have verified that none of the above is the case, then sorry, but it sounds like the issue really must be in your code and there is a problem with your QA method (e.g., maybe you are looking at the wrong AA request).
Update:
Based on your comment:
Earlier I was making an s.tl() call; replacing it with an s.t() call resolved my problem of pageName/server/page views not populating in Omniture, and now they do. But the current problem is that we need the page name on all SPA clicks (which can be achieved with an s.t() call), while page views are not wanted on all clicks. So it's link tracking that's needed, only with pageName data. I am struggling with how to not populate page views on an s.t() call or, vice versa, how to get pageName populated on an s.tl() call. Again, Omnibug shows all the requests just fine, but the issue shows up in the reports in Omniture.
When Adobe processes a hit, it wipes pageName for s.tl calls, as that's how it determines whether to count the request as a page view. If you want to see the page name even for s.tl calls, the common practice is to dupe the pageName value into a prop or eVar, send it with the s.tl call, and look at that report instead. In fact, most clients I work with don't even use the native Pages report, and instead use the duped (usually eVar) report.
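A minimal sketch of that (prop40/eVar40 are just example slots, and the pageName value is whatever your SPA computes):

    // Sketch: dupe pageName into a prop/eVar so the value survives the
    // s.tl() link call (Adobe wipes s.pageName itself for link hits).
    s.pageName = 'spa:some-view';      // your SPA's computed page name
    s.prop40   = s.pageName;           // duped copy for reporting
    s.eVar40   = s.pageName;
    s.linkTrackVars = 'prop40,eVar40'; // make sure the dupes go out with the hit
    s.tl(true, 'o', 'spa click');      // link call: counts no page view

Then build your page reporting off the prop40/eVar40 report instead of the native Pages report.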
I am working on integrating a 2sxc content WebAPI feed into a ReactJS application.
I have managed to get a JSON feed of data into the application, and am in the process of mapping out the data.
I'm wondering what the best practice would be to "resolve" a URL that comes through as a DNN page/tab ID.
Below I will showcase the various points where this is referenced...
First, the setup of the entity/data types...
Then an example entry with the data filled out... The page link/URL is set up to point to another internal page on the DNN website:
Finally, you can see this data item come through as a JSON feed via the 2sxc API:
What is the best way to convert this piece of data into a URL which can be used in a SPA type application?
There isn't any "server-side" code going on, just reading a JSON feed on the client side...
My initial idea would be to parse this piece of data in JS to extract the number, then use something like one of these:
http://www.dotnetnuke.com/tabid/85/default.aspx
http://www.dotnetnuke.com/default.aspx?tabid=85
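Something like this rough sketch is what I had in mind (it assumes the feed value carries the tab ID as a number, e.g. "page:85" or just 85; the helper name is mine):

    // Rough sketch: pull the numeric tab id out of the 2sxc link value
    // and build the old-style DNN URL from it.
    function tabIdToUrl(linkValue) {
        var match = /(\d+)/.exec(String(linkValue));
        return match ? '/default.aspx?tabid=' + match[1] : null;
    }

    tabIdToUrl('page:85'); // "/default.aspx?tabid=85"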
I was hoping someone with more experience would be able to suggest a better / cleaner approach.
Thanks in advance
If you were server-side in Razor, you'd be doing something like this:

    @using DotNetNuke.Common
    <a href="@Globals.NavigateURL(XXXX)">View List</a>

where XXXX = Dnn.Tab.TabID, or define a string with the tab ID you want.
I seem to have a vague memory of seeing somewhere that Daniel (2sxc) has a way to use Globals.NavigateUrl() or similar on the client side, but I have no idea where, or whether I really did see that.
The Default.aspx?tabid=xx format will certainly work, as it's the oldest DNN convention and is still used in fallbacks. The URLs aren't nice, but it's OK.
The reason you're seeing this is that the query doesn't perform the automatic lookup that AsDynamic(...) does for you. There is an endpoint to look these up, but it's not official, so it could change, and therefore I don't want to suggest that you use it.
So if you really want a nicer URL, you should either see if DNN has a REST API for this, or create a small 2sxc API endpoint of your own (in the api folder) just to look it up, using NavigateURL there. It would be cool if you shared your work.
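On the client that could then be as simple as this sketch (the route is a placeholder for whatever endpoint you create; it is not an official 2sxc API):

    // Sketch: ask your own small endpoint to turn a tab id into a friendly
    // URL (server side it would just call Globals.NavigateURL(tabId)).
    fetch('/your-app-api/ResolveTab?tabId=85')
        .then(function (res) { return res.text(); })
        .then(function (url) { console.log(url); /* use the friendly URL */ });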
Some time ago Google officially deprecated the AJAX crawling scheme they came up with back in 2009, so I decided to stop generating snapshots in PhantomJS for _escaped_fragment_ and rely on Google to render my single-page app like a modern browser and discover its content. They describe it here. But now I seem to have a problem.
Google has indexed my page (at least I can see in Webmaster Tools that it has), but when I look at Google Index --> Content Keywords in Webmaster Tools, it shows only the unprocessed content of my AngularJS templates and the names of my bound variables, e.g. {{totalnewprivatemessagescount}}, etc. The keywords do not contain the words that should be generated by AJAX calls when the JavaScript executes; e.g., "fighter" is not in there at all, and it should be all over the place.
Now, when I use Crawl --> Fetch as Google --> Fetch and Render, the snapshot of what the Google bot sees is very much the same as what a user sees and is clearly generated using JavaScript. The Fetch HTML tab, though, shows only the source without it being processed by JS, which I'm guessing is fine.
Now my question is: why didn't Google index my website properly? Is there anything I implemented incorrectly somewhere? The website is live at https://www.fightersconnect.com, and thanks for any advice.
I have a function in the back end that relies on the property names of an object, which is sent using AJAX with AngularJS. Can a user alter the property names using a debug tool, thereby changing what I would normally expect in the back end? I suppose doing that would also affect the entire app in general, if it were possible.
I guess it's kind of like someone using a debug tool to change the name attribute on a form and then submitting it. So I was curious to know whether it's something I should keep in mind with AngularJS. I hope that makes sense.
If a user is smart enough, he or she can change almost anything using the developer tools the browser provides. What's more, if a back-end endpoint is known, it is easy to mock a custom request with custom data.
You should always validate the request, since anything that doesn't come directly from your code can lead to a security breach.
The big downside of AJAX is that its requests are easily debugged using dev tools and, if they are not designed correctly, they expose your internal structures.
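For example, a minimal sketch of that validation (it assumes a Node/Express back end, which the question doesn't specify; the same whitelisting idea applies on any stack):

    // Sketch: accept only the properties you expect, with the types you
    // expect, instead of trusting whatever property names the client sent.
    var express = require('express');
    var app = express();
    app.use(express.json());

    app.post('/api/messages', function (req, res) {
        var allowed = ['title', 'body'];   // the only properties we accept
        var clean = {};
        allowed.forEach(function (key) {
            if (typeof req.body[key] === 'string') {
                clean[key] = req.body[key];
            }
        });
        // ...persist `clean`, never the raw req.body
        res.sendStatus(204);
    });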
In order to make nice URLs, I decided to remove the # from my URLs using the tip from the following question: Removing the fragment identifier from AngularJS urls (# symbol). Now I've realized that my URLs don't work if I try to access them directly. Taking the example from the related question, if I put the URL below directly into the browser: http://localhost/phones
I'll get a 404.
Any idea how to solve this?
You need to write server-side code that will generate the page that you were previously depending on the client-side JavaScript to generate.
This will then be the initial view for that URL.
If the client supports JavaScript (and the JavaScript doesn't fail for any reason) it will then take over for future interactions.
Otherwise, the regular links and forms you (should) have in the page will function as normal without JS.
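If full server-side generation is more than you need, the common lighter-weight fix for HTML5-mode deep links is a server rewrite that returns the app shell for every route and lets Angular take over; here is a minimal sketch with Express (assuming a Node server, purely as an illustration; Apache and nginx have equivalent rewrite rules):

    // Sketch: serve the SPA shell for any deep link so direct URLs don't 404.
    var express = require('express');
    var path = require('path');
    var app = express();

    app.use(express.static(path.join(__dirname, 'public'))); // built SPA assets
    app.get('*', function (req, res) {
        res.sendFile(path.join(__dirname, 'public', 'index.html'));
    });

    app.listen(3000);

Note that this serves the empty shell to crawlers and non-JS clients too, so it trades away the progressive-enhancement benefits described above.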