Get page URL for block types - Episerver

I am working on generating a report in Episerver in the form of a scheduled job. The report is used to list all the content (page types, blocks, media) within Episerver.
There are some good NuGet packages for content usage, but to have more control and the option to tweak and extend it further, I am creating a custom one rather than using the available third-party packages.
The scheduled job is kind of similar to
https://www.codeart.dk/blog/2018/12/content-report-generator/ .
The article helped me a lot to get my report working with some modifications as per my requirement.
The one thing I am struggling with is getting the URL of the page where a block is used. Now this is a point of debate.
I am aware that blocks, being shared in nature, can be used anywhere in a site, but they are still used on pages, or in other blocks which in turn are used on a page. In other words, they are directly or indirectly part of a page. So is there a way to get the page URL of a block, regardless of how many pages it is used on?
On every forum I have looked at, there is always a page URL for PageData or MediaData, but nothing for BlockData. Even the third-party NuGet packages I have looked at do not expose page URLs for block types.
I do understand that there is nothing out of the box here. Is there a way to achieve this, i.e. get the page URL of a specific block type, which could be a list of page URLs if the block is used on multiple pages?
Recursive function to reach the page:
private string GetPublicUrl(IContentRepository contentRepository, IContentSoftLinkRepository contentSoftLinkRepository, ContentReference contentReference)
{
    var publicUrl = string.Empty;
    var content = contentRepository.Get<IContent>(contentReference);
    var referencingContentLinks = contentSoftLinkRepository.Load(content.ContentLink, true)
        .Where(link => link.SoftLinkType == ReferenceType.PageLinkReference && !ContentReference.IsNullOrEmpty(link.OwnerContentLink))
        .Select(link => link.OwnerContentLink);

    foreach (var referencingContentLink in referencingContentLinks)
    {
        publicUrl = UrlResolver.Current.GetUrl(referencingContentLink.GetPublicUrl()) ?? GetPublicUrl(contentRepository, contentSoftLinkRepository, referencingContentLink);
    }

    return publicUrl;
}
I have written this recursive function to reach the page, but it only works when there is a single level of nesting, for instance a 2Col block placed directly on a page.
If I have a block, say a Download block, which sits on a 2Col block, which in turn is on a page, then the URL comes back empty.
Any input is appreciated.

With IContentSoftLinkRepository you can find where the blocks are used. You can check whether the SoftLink points to a page with SoftLink.SoftLinkType == PageLinkReference and then use IUrlResolver to get the page's URL.
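Building on that, here is a minimal sketch of how the recursion could walk upwards through nested blocks until it reaches pages, which is where the single-level version falls short. The class and field names (BlockUsageUrlResolver, _contentRepository, _softLinkRepository, _urlResolver) are hypothetical, constructor injection of the three services is assumed, and on older CMS versions UrlResolver can stand in for IUrlResolver:

using System.Collections.Generic;
using System.Linq;
using EPiServer;
using EPiServer.Core;
using EPiServer.DataAbstraction;
using EPiServer.Web.Routing;

public class BlockUsageUrlResolver
{
    private readonly IContentRepository _contentRepository;
    private readonly IContentSoftLinkRepository _softLinkRepository;
    private readonly IUrlResolver _urlResolver;

    public BlockUsageUrlResolver(
        IContentRepository contentRepository,
        IContentSoftLinkRepository softLinkRepository,
        IUrlResolver urlResolver)
    {
        _contentRepository = contentRepository;
        _softLinkRepository = softLinkRepository;
        _urlResolver = urlResolver;
    }

    // Returns the URL of every page that uses the block, directly or via other blocks.
    public IEnumerable<string> GetPageUrls(ContentReference blockLink)
    {
        // true = reversed lookup: find the content that references this block.
        var ownerLinks = _softLinkRepository.Load(blockLink, true)
            .Where(link => link.SoftLinkType == ReferenceType.PageLinkReference
                           && !ContentReference.IsNullOrEmpty(link.OwnerContentLink))
            .Select(link => link.OwnerContentLink);

        foreach (var ownerLink in ownerLinks)
        {
            var owner = _contentRepository.Get<IContent>(ownerLink);

            if (owner is PageData)
            {
                // Reached a page: resolve its friendly URL.
                yield return _urlResolver.GetUrl(owner.ContentLink);
            }
            else if (owner is BlockData)
            {
                // The owner is another block (e.g. the 2Col block): keep walking up.
                foreach (var url in GetPageUrls(owner.ContentLink))
                {
                    yield return url;
                }
            }
        }
    }
}

Calling GetPageUrls(block.ContentLink).Distinct() gives one URL per page even when the block is referenced several times, and for unusual structures a visited set would guard against circular block references.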

Related

How to automate download of generated PDFs

Scenario:
We are required to enter data daily into a government database in a European country. We suddenly have a need to retrieve some of that data, but the only format they will allow is PDFs generated from the data, hundreds of them. We would like to avoid sitting in front of a web browser clicking link after link.
The links generated look like
<a href='javascript:viajeros("174814255")'>
<img src="img/pdf.png">
</a>
I have almost no experience with Javascript, so I don't know whether I can install a routine as a bookmark to loop through the DOM, find all the links, and call the function. Nor, if that's possible, how to write it.
The ID numbers can't be predicted, so I can't write another page or curl/wget script to do it. (And if I could, it would still fail as mentioned below.)
The 'viajeros' function is simple:
function viajeros(id){
    var idm = document.forms[0].idioma.value;
    window.open("parteViajeros.do?lang="+idm+"&id_fichero=" + id);
}
but feeding that URI to curl or wget fails. Apparently they check either a cookie or REFERER and generate an error.
Besides, with each link putting the PDF in a browser tab instead of in the downloads directory, we would still have to do two clicks (tab and save) hundreds of times.
What should I do instead?
For what it's worth, this is on MacOS 10.13.4. I normally use Safari, but I also have available Opera and Firefox. I could install Chrome, but that's the last resort. No, that's second to last: we also have a (shudder) Windows 10 laptop. THAT'S last resort.
(Note: I looked at the four suggested duplicates that seemed promising, but each either had no answer or instructed the asker to modify the code that generates the PDF.)
document.querySelectorAll("img[src=\"img/pdf.png\"]")
    .forEach((el, i) => {
        let id = el.parentElement.href.split("\"")[1];
        let url =
            "parteViajeros.do?lang=" + document.forms[0].idioma.value +
            "&id_fichero=" + id;
        setTimeout(() => {
            downloadURI(url, id);
        }, 1500 * i)
    });
This gets all of the images of the PDF icon, then looks at each one's parent for the link target. The href has its ID extracted and passed into a string that builds the path of the file to be downloaded, similar to 'viajeros' but without the window.open. That URL is then passed to downloadURI, which performs the download.
This uses the downloadURI function from another Stack Overflow answer: you can download a URL by setting the download attribute on a link and then clicking it, which is implemented as follows. This is only tested in Chrome.
function downloadURI(uri, name) {
    var link = document.createElement("a");
    link.download = name;
    link.href = uri;
    document.body.appendChild(link);
    link.click();
    document.body.removeChild(link);
    delete link;
}
Open the page with the links and open the console. Paste the downloadURI function first, then the code above to download all the links.
I had a similar situation, where I had to download all the (invoice) PDFs generated in a day or over the past week.
After some research I was able to do the scraping using PhantomJS, and later I discovered CasperJS, which made the job easier.
PhantomJS and CasperJS are headless browsers.
Since you have little experience with JS: if you are a C# guy, then CefSharp may help you.
Some useful links to get started with PhantomJS, CasperJS and CefSharp:
PhantomJs
CasperJs
CefSharp
Try reading the documentation for downloading files.
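For the CefSharp route, here is a rough sketch of the idea: drive an off-screen Chromium instance, which keeps cookies and session state like a normal browser, and let a download handler write each PDF to a folder instead of opening tabs. The host name and folder path are placeholders, and the IDownloadHandler signatures shown are from older CefSharp releases (newer versions add a CanDownload method), so check the documentation for the version you install:

using System;
using System.IO;
using CefSharp;
using CefSharp.OffScreen;

// Saves every download into a fixed folder without showing a save dialog.
public class SaveToFolderDownloadHandler : IDownloadHandler
{
    private readonly string _targetFolder;

    public SaveToFolderDownloadHandler(string targetFolder)
    {
        _targetFolder = targetFolder;
    }

    public void OnBeforeDownload(IWebBrowser chromiumWebBrowser, IBrowser browser,
        DownloadItem downloadItem, IBeforeDownloadCallback callback)
    {
        using (callback)
        {
            var path = Path.Combine(_targetFolder, downloadItem.SuggestedFileName);
            callback.Continue(path, showDialog: false);
        }
    }

    public void OnDownloadUpdated(IWebBrowser chromiumWebBrowser, IBrowser browser,
        DownloadItem downloadItem, IDownloadItemCallback callback)
    {
        // Progress reporting or cancellation could go here.
    }
}

class Program
{
    static void Main()
    {
        Cef.Initialize(new CefSettings());

        // parteViajeros.do is the URL pattern from the question; the id values
        // would first be scraped from the listing page.
        var browser = new ChromiumWebBrowser(
            "https://example.invalid/parteViajeros.do?lang=es&id_fichero=174814255")
        {
            DownloadHandler = new SaveToFolderDownloadHandler(@"C:\pdf-dump")
        };

        Console.ReadLine(); // keep the process alive while the download completes

        browser.Dispose();
        Cef.Shutdown();
    }
}

Keep in mind that CefSharp is Windows-only, so this option would mean using the Windows 10 laptop; on macOS the PhantomJS/CasperJS route avoids that.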

Exclude Page from being tracked in Omniture

Is there a way to exclude a single page from being tracked in Omniture (SiteCatalyst)? I want to track every page except that one, i.e. not send a request to Omniture when that page is hit.
Is there a way to update the script for that?
Thanks
Yes, there is a way. If you click the cog next to the Adobe Analytics tool on the Overview page in DTM, you'll be taken to a settings page. There is a tab at the bottom titled Customize Page Code; code placed there runs before your Adobe beacon would normally be sent. Simply put in your logic to detect the page in question and return false, and your tracking beacon won't fire.
if (location.pathname == 'whatever') {
    return false;
}
If you don't have any idea what you're doing code-wise, be careful: a lot of breakage and data loss can occur if bad code is put here.
If you're unfamiliar with JavaScript, I would suggest seeking out a little help. If you want to give it a shot but you're unsure of your code, I would suggest navigating to the page that you want to exclude, opening the console (F12), and typing location.pathname. This will give you the exact string to test against, so your code would be:
if (location.pathname == 'whatever/string/the/location/pathname/gave/you') {
    return false;
}
Make sure you don't forget the quotes. Good luck!

Smoothstate: Get $(document) in onReady

How do I get to $(document) in the onReady method?
I've tried smoothState.cache[smoothState.href] but couldn't make it work.
(Fantastic plugin.)
Thanks
I'm assuming you want to get at the document for the new page that's being loaded. Technically, $(document) exists in onReady, and it's the first document you visited on the site; after that, smoothState just dynamically updates it by swapping out content. So whatever existed on the previous page you were viewing is there as normal. Once the line $container.html( $newContent ); runs in your onReady method (assuming your code follows the smoothState examples), your new content should be available.
However, if you want to get at the actual, full document for the new page that got loaded up, not just what's contained in your wrapper div that gets swapped out, it's contained in smoothState.cache[smoothState.href].doc. It's got the header, the body, everything.
A little reading of the smoothState source code shows that you can pull it into a useful format this way:
var $newDoc = $("<html></html>").append( $(smoothState.cache[smoothState.href].doc) );
At this point, you can run find queries or whatever you need to go look through things.

Module block view only prints if it is on a certain page

In Drupal 7, is there a way for me to insert my block into a region only on certain pages from inside my module code, or do I have to do that in the GUI block list?
I've created a banner module, but I want to be able to choose the pages it appears on. For a start, it could appear only on the front page. I tried a $is_front check, but I get an error that $is_front or $variables is undefined.
This doesn't work inside my block_view() function in my module:
if ($is_front) {
    $block['content'] = theme('mydata', $banner_node_list);
}
I think your best bet is to use the block GUI to select where it appears. I can't see any benefits to doing it in the code when it's already built in to be honest.

HTML Bridge not working with cross-domain Silverlight XAP

I've got a complex Silverlight app that uses the HTML bridge functionality quite extensively (in both directions). The app runs fine when the hosting page is from the same domain as the XAP source. Unfortunately, I can't get the HTML bridge functionality to work when the hosting page is on a different domain.
Now, I know the various tricks normally required to get this to work, i.e., everything that's documented here: http://msdn.microsoft.com/en-us/library/cc645023(VS.95).aspx. I've even put together my own simplified cross-domain repro that I was hoping would highlight the problem, but unfortunately, my "repro" works, i.e., both JS->SL and SL->JS functionality work just fine in it, even if the XAP is hosted on a different domain.
Here's what I've tried so far to narrow down the problem:
On my production solution (where I'm having the problem):
Confirmed that "EnableHtmlAccess" is set to true in the <object> tag.
Confirmed that "ExternalCallersFromCrossDomain" is set to "ScriptableOnly" in the AppManifest.xml file.
On my repro solution (where I can't get it to have the problem):
Added multiple libraries with multiple registered scriptable objects.
Added events to the registered objects.
On both:
Tried it with a static <object> tag and with a dynamically created <object> tag (via Silverlight.js).
Tried it with and without specifying handlers for onSourceDownloadProgressChanged, onSourceDownloadComplete, onError, and onLoad.
Tried it with and without a splashscreen.
I'm kinda running out of ideas. Anyone have any suggestions for other troubleshooting steps?
Well, so far I haven't been able to track down the precise difference between the working and the non-working versions. But I came up with a workaround that's sufficient for my needs. As it turns out, only the JS->SL functionality was broken; any calls from SL->JS still worked. So what I did was to register the scriptable SL objects from within Silverlight. In my controlling JavaScript class, I created a function with a unique name, and registered it with the window object:
var mLoadingController;
var mAppId = 'alantaClient_' + Alanta.makeId();
var mSetLoadingControllerId = mAppId + '_SetLoadingController';
window[mSetLoadingControllerId] = function (value) {
    mLoadingController = value;
    onLoad();
};
And then I pass in the name of the function as a part of the Silverlight app's InitParams:
var initParams = 'setLoadingControllerId=' + mSetLoadingControllerId;
Silverlight.createObject(mSource, mAppHost, mAppId, params, events, initParams);
And then I call that registration function from within Silverlight, like so:
// Do everything necessary to make the LoadingController scriptable.
HtmlPage.RegisterScriptableObject("LoadingController", LoadingController.Instance);
string setLoadingControllerId;
if (e.InitParams.TryGetValue(LoaderConstants.SetLoadingControllerIdReference, out setLoadingControllerId))
{
    HtmlPage.Window.Invoke(setLoadingControllerId, LoadingController.Instance);
}
And then I can call it from JS, like so:
mLoadingController.GoToRoom();
Kinda hacky, but it works. Close enough for now.
