I have a public file URL stored in an object as a custom field. Is there a way to get the ContentDocumentId of this file? I need to relate this file to another object.
I tried to look around but no luck so far.
I couldn't find anything in the Salesforce docs or by googling around.
I am working on integrating a 2sxc content WebAPI feed into a ReactJS application.
I have managed to get a JSON feed of data into the application, and am in the process of mapping out the data.
I'm wondering what the best practice would be to "resolve" a URL that comes through as a DNN Page/Tab ID.
Below I will showcase the various points where this is referenced...
First, the setup of the entity / data types...
Then this is an example entry with the data filled out... The page link / URL is set up to point to another internal page on the DNN website:
Finally you can see this data item come through as a JSON feed via the 2sxc API:
What is the best way to convert this piece of data into a URL which can be used in a SPA type application?
There isn't any "server-side" code going on, just reading a JSON feed on the client side...
My initial idea would be to parse this piece of data in JS to extract the number, then build a URL like one of these:
http://www.dotnetnuke.com/tabid/85/default.aspx
http://www.dotnetnuke.com/default.aspx?tabid=85
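Something like this rough sketch is what I had in mind (the property name and the "page:85" value format are just placeholders for whatever the 2sxc feed actually returns):

// Rough sketch only; assumes the feed delivers the page link as a value like "page:85".
// Adjust the property name and the parsing to the real shape of your JSON.
const SITE_ROOT = "http://www.dotnetnuke.com"; // placeholder site root

function tabIdToUrl(pageLinkValue) {
  const match = /\d+/.exec(String(pageLinkValue)); // pull out the numeric tab id
  return match ? SITE_ROOT + "/default.aspx?tabid=" + match[0] : null;
}

// e.g. tabIdToUrl("page:85") -> "http://www.dotnetnuke.com/default.aspx?tabid=85"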
I was hoping someone with more experience would be able to suggest a better / cleaner approach.
Thanks in advance
If you were server-side in Razor you'd be doing something like this:
@using DotNetNuke.Common

<a href="@Globals.NavigateURL(XXXX)">View List</a>

where XXXX = Dnn.Tab.TabID, or define a string with the tab id you want
I have a vague memory of seeing somewhere that Daniel (2sxc) has a way to use Globals.NavigateUrl() or similar on the client side, but I have no idea where, or whether I really did see that.
The Default.aspx?tabid=xx format will certainly work, as it's the oldest DNN convention and is still used in fallbacks. The URLs aren't nice, but they're ok.
The reason you're seeing this is that the query doesn't perform the automatic lookup that AsDynamic(...) does for you. There is an endpoint to look them up, but it's not official, so it could change and therefore I don't want to suggest that you use it.
So if you really want a nicer URL, you should either check whether DNN has a REST API for this, or create a small 2sxc API endpoint of your own (in the api folder) just to look that up using NavigateURL. Would be cool if you shared your work.
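On the client side, calling such a custom endpoint could look roughly like the sketch below. The /api/ResolveTab route and its response shape are purely hypothetical (it would be whatever you name your own controller action), with the ?tabid= form kept as a fallback:

// Hypothetical sketch: ask a small custom endpoint (implemented server-side,
// e.g. with Globals.NavigateURL) for the friendly URL, and fall back to the
// classic ?tabid= form if the endpoint is not available.
async function resolveTabUrl(tabId) {
  try {
    const res = await fetch("/api/ResolveTab?tabid=" + tabId); // route name is an assumption
    if (res.ok) {
      const data = await res.json();
      if (data.url) return data.url; // friendly URL produced by NavigateURL
    }
  } catch (e) {
    // endpoint unreachable, fall through to the fallback below
  }
  return "/default.aspx?tabid=" + tabId; // oldest DNN convention, always works
}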
I am following the qooxdoo tweets tutorial, and in part 3 it seems the public timeline URL is dead.
Is there another URL where we can retrieve the data to perform the tests?
It seems that this location got lost when we migrated the website from 1&1 infrastructure, BUT you can find the respective file here ...
https://github.com/qooxdoo/qooxdoo/blob/branch_5_0_x/component/tutorials/tweets/step4.5/source/resource/tweets/service.js
I will fix the tutorial.
I don't see any updated answers on similar topics (hopefully something has changed with recent crawler releases), which is why I'm asking a specific question.
I have an AngularJS website, which lists products that can be added or removed (the links are clearly updated). URLs have the following format:
http://example.com/#/product/564b9fd3010000bf091e0bf7/published
http://example.com/#/product/6937219vfeg9920gd903bg03/published
The product's ID (6937219vfeg9920gd903bg03) is retrieved by our back-end.
My problem is that Google doesn't list them, probably because I don't have a sitemap.xml file on my server.
On any given day a page can be added (and therefore a new URL) or removed.
How can I manage this?
Do I have to manually (or by batch) edit the file each time?
Is there a smart way to tell Google: "Hey my friend, look at this page"?!
Generally you can create a sitemap for a JavaScript / AngularJS site, and according to this guidance from Google they will crawl it:
https://webmasters.googleblog.com/2015/10/deprecating-our-ajax-crawling-scheme.html
You can also use Fetch as Google to validate that the pages render correctly.
There is also another study about Google's execution of JavaScript:
http://searchengineland.com/tested-googlebot-crawls-javascript-heres-learned-220157
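Since products come and go daily, one low-effort option is to regenerate sitemap.xml from the back-end's product list on a schedule instead of editing it by hand. Below is a minimal Node.js sketch under a few assumptions: the product IDs come from your back-end (hard-coded here with the IDs from the example URLs), and the app exposes html5Mode-style URLs such as /product/<id>/published, since crawlers ignore the "#" fragment.

// Hypothetical sketch: rebuild sitemap.xml from the list of published product IDs.
// In practice you would pull the IDs from your back-end (e.g. in a nightly job)
// rather than hard-coding them as done here for illustration.
const fs = require("fs");

const BASE_URL = "http://example.com/product"; // assumption: html5Mode URLs, no "#"
const productIds = [
  "564b9fd3010000bf091e0bf7",
  "6937219vfeg9920gd903bg03",
];

const urls = productIds
  .map((id) => "  <url><loc>" + BASE_URL + "/" + id + "/published</loc></url>")
  .join("\n");

const sitemap =
  '<?xml version="1.0" encoding="UTF-8"?>\n' +
  '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n' +
  urls +
  "\n</urlset>\n";

// Publish the generated file at the site root, reference it in robots.txt,
// and submit it once in Google Search Console.
fs.writeFileSync("sitemap.xml", sitemap);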
I am trying to hit courier companies' websites (e.g. Bluedart, FedEx, etc.) from a controller by passing the courier tracking number, and then fetch the status for that tracking number.
I am using $HttpSocket->get()/post() to hit the page URL.
I am able to display the response body.
How can I fetch the data from the response?
Or is there any other way to achieve the same?
Please help me out.
How can I fetch the data from the response?
Parse the result, either with regular expressions or with the DOMDocument class, and traverse it. See Parsing HTML in CakePHP as well.
Or is there any other way to achieve the same?
Use the APIs these companies usually offer.
I have a question. I've been working with Salesforce for a while and now I have a requirement from a customer.
They need some custom fields to be populated with values from the parent object. After some research on Stack Overflow, I found this post, but it isn't working for me because my project is a managed package, and when it is installed on another Salesforce instance the IDs of the custom fields change.
If someone could help me, I would be grateful. Thanks!
I can't see any way of doing the same as that post without using the IDs. I was thinking you could route via a VF page and build up the URL in the controller, but it doesn't seem as though you can get the IDs of fields, just their type, etc.
I think the best you could do in this instance is to override the default new record page with a Visualforce page. In the constructor of your controller you could then loop through the page parameters and pre-fill the corresponding fields on the new record before it's displayed on screen. Using field sets or just an <apex:detail> component would keep the level of effort down and also maximise the flexibility of the page for the end users.