I am trying to import an old TYPO3 v4 site into v10, using the external_importer extension for the job. Along the way I would like to download internal files such as PDFs and relink them in the bodytext.
The idea would be to transform the saved content into real HTML, evaluate the hyperlinks to see whether they contain relative PDF links, and if so trigger the download and rebuild the link to the file.
How would I proceed in this case?
I tried the following
$parseObj = \TYPO3\CMS\Core\Utility\GeneralUtility::makeInstance(ContentObjectRenderer::class);
$html = $parseObj->stdWrap_HTMLparser($htmlStr, []);
DebugUtility::debug($html);
but the hyperlink still remains as <link http://someurl.com>
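For reference, once the bodytext has been rendered to real HTML, the download-and-relink step described above could look roughly like the sketch below; the DOMDocument approach, the old site's base URL and the fileadmin target folder are assumptions, not something taken from the original setup:
$dom = new \DOMDocument();
@$dom->loadHTML($html);
foreach ($dom->getElementsByTagName('a') as $link) {
    $href = $link->getAttribute('href');
    // only handle relative links that point to a PDF
    if (preg_match('#\.pdf$#i', $href) && !preg_match('#^https?://#i', $href)) {
        $data = \TYPO3\CMS\Core\Utility\GeneralUtility::getUrl('https://old-site.example/' . ltrim($href, '/'));
        if ($data !== false) {
            $target = 'fileadmin/imported/' . basename($href);
            \TYPO3\CMS\Core\Utility\GeneralUtility::writeFile(
                \TYPO3\CMS\Core\Core\Environment::getPublicPath() . '/' . $target,
                $data
            );
            // rebuild the link so it points to the downloaded file
            $link->setAttribute('href', '/' . $target);
        }
    }
}
$html = $dom->saveHTML();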
I had a similar problem. If the solution is the same as mine, you are halfway there. You are missing the reference, i.e. the instruction that tells TYPO3 how to process the text. Here is what worked for me.
TYPO3 render full t3:// links from bodytext in utility files
First, use parseFunc and not stdWrap_HTMLparser. Then use this reference: lib.parseFunc. In the end you should have something like this:
$parseFuncTSPath = 'lib.parseFunc';
$html = $parseObj->parseFunc($htmlStr, [], '< ' . $parseFuncTSPath);
DebugUtility::debug($html);
And since you are using TYPO3 10, I would recommend using DI (dependency injection). You can basically copy and paste the code from the SO answer linked above.
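A minimal sketch of that DI variant, assuming a frontend context where lib.parseFunc is available and that ContentObjectRenderer can be constructor-injected as in the linked answer (class and method names here are purely illustrative):
use TYPO3\CMS\Frontend\ContentObject\ContentObjectRenderer;

class BodytextRenderingService
{
    /** @var ContentObjectRenderer */
    protected $contentObjectRenderer;

    public function __construct(ContentObjectRenderer $contentObjectRenderer)
    {
        $this->contentObjectRenderer = $contentObjectRenderer;
    }

    public function render(string $htmlStr): string
    {
        // '< lib.parseFunc' copies the TypoScript setup from lib.parseFunc,
        // which resolves <link ...> / t3:// references into real <a> tags
        return $this->contentObjectRenderer->parseFunc($htmlStr, [], '< lib.parseFunc');
    }
}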
Best regards
I tried to use an APOC procedure from here to export the DB using the following:
CALL apoc.export.json.all("all.json",{useTypes:true})
I can successfully export to JSONL. However, I am not able to change the JSON format to the other available formats such as JSON_LINES, ARRAY_JSON, JSON or JSON_ID_AS_KEYS. According to the documentation, the following should work, but it does not:
CALL apoc.export.json.all("all.json",{config:{jsonFormat:'ARRAY_JSON'}})
The result of above procedure is in JSONL but not ARRAY_JSON.
I have also tried the solution here but did not succeed.
Cheers,
A
This is working now in neo4j versions 4.2.x with APOC version 4.2.0.2: https://github.com/neo4j-contrib/neo4j-apoc-procedures/releases/download/4.2.0.2/apoc-4.2.0.2-all.jar
The syntax is simpler. Notice the config is a dictionary rather than a nested dictionary. See my sample below.
OLD: CALL apoc.export.json.all("all.json",{config:{jsonFormat:'ARRAY_JSON'}})
NEW: CALL apoc.export.json.all("all.json", {jsonFormat: 'ARRAY_JSON'})
Result:
(type is array of dictionaries)
The solution to my question was replacing the APOC .jar file with the latest release from here. I had to update the syntax as well.
CALL apoc.export.json.all("all.json", {jsonFormat: 'ARRAY_JSON'})
You can open the plugin folder by using the three dots next to the blue Open button and choosing Open Folder -> Plugins. Copy and paste the path shown into your file manager if it doesn't open.
In the plugin folder, you can see your APOC version in the apoc file name.
The thing is, I have found how to upload a document and then download it, but I just want to download it. I want to do it using the UI Designer but I don't know how.
Thanks :)
I don't know which tool you are using to design your UI; anyway, this concerns functionality, not design. At this point, I need to know which language you want (or are able) to use. For example, in PHP it's very simple: you can do something like the following.
Create a PHP file, e.g. downloadpdf.php.
First option, if you want to generate/serve the PDF "on the fly":
<?php
function download($id) {
    // content headers: tell the browser this is a PDF and force a download
    header('Content-Type: application/pdf');
    header('Content-Disposition: attachment; filename="document.pdf"');
    // select the data from the database using $id, or hardcode it for testing
    $data = fetchPdfFromDatabase($id); // hypothetical lookup helper
    echo $data;
}
?>
and call this function with some ID to select the record from the database (ignore this if you want to hardcode it).
Another option to download a file, if it's stored on the server, is to make a link to the file (statically or dynamically). If you want to take control of file downloads, check this post:
http://www.media-division.com/the-right-way-to-handle-file-downloads-in-php/
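If the file is already stored on the server and you still want PHP to control the download (as in that post), a minimal sketch could look like this; the file path is just a placeholder:
<?php
// serve an existing file as a forced download
$file = '/path/to/storage/document.pdf'; // placeholder path
if (is_readable($file)) {
    header('Content-Type: application/pdf');
    header('Content-Disposition: attachment; filename="' . basename($file) . '"');
    header('Content-Length: ' . filesize($file));
    readfile($file);
    exit;
}
http_response_code(404);
?>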
I don't mean that it can be done with UI Designer tools, and it doesn't matter whether it comes from a form or not.
Cheers!
You should create a link and a variable whose type is JavaScript expression. As the variable value, write:
return "/bonita/portal/" + $data.context.mainDoc_ref.url;
As the link URL, use your variable, and as the text:
Download: {{context.mainDoc_ref.fileName}}
Here you can find an excellent example for this case.
Using CakePHP 2.6.7
When generating a PDF using wkhtmltopdf I can simply run wkhtmltopdf http://url/of/my/website some_name.pdf from the command line, but I can find no way to pass a URL in this manner through CakePdf when using the Wkhtmltopdf engine. The closest I can get is using file_get_contents('http://my/website') and then manually going through the result and turning relative URLs for stylesheets/scripts into full URLs.
Is there a way to pass a URL to CakePdf? Alternatively, what would be the best way to go through a load of HTML and turn the relative links into full ones?
Partial Solution
For the moment I have managed to use the following code to manipulate the links on the page and replace them with the full URLs.
$parsed_web_address = parse_url($this->request->data['ArmawareHtmlToPdf']['web_address']);
$root_web_address = $parsed_web_address['scheme'] . '://' . $parsed_web_address['host'] . '/';
$html_string = str_replace('href="/', 'href="' . $root_web_address, $html_string);
$html_string = str_replace('src="/', 'src="' . $root_web_address, $html_string);
No. As far as I'm aware you can't pass a URL through CakePdf. The entire purpose of the plugin is to convert HTML generated by your CakePHP app with wkhtmltopdf, so it wouldn't make much sense to pass a URL to an external HTML source.
My suggestion would be to use wkhtmltopdf directly using proc_open(). Take a look at the WkHtmlToPdfEngine.php file in the CakePdf plugin for an example of using this with wkhtmltopdf.
http://php.net/manual/en/function.proc-open.php
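For example, a rough sketch of passing a URL straight to wkhtmltopdf via proc_open(); the output path is a placeholder and wkhtmltopdf is assumed to be on the PATH:
$descriptorSpec = array(
    0 => array('pipe', 'r'), // stdin
    1 => array('pipe', 'w'), // stdout
    2 => array('pipe', 'w'), // stderr
);
$command = 'wkhtmltopdf ' . escapeshellarg('http://url/of/my/website') . ' ' . escapeshellarg('/tmp/some_name.pdf');
$process = proc_open($command, $descriptorSpec, $pipes);
if (is_resource($process)) {
    fclose($pipes[0]);
    $output = stream_get_contents($pipes[1]); // wkhtmltopdf progress output
    $errors = stream_get_contents($pipes[2]); // warnings and errors, if any
    fclose($pipes[1]);
    fclose($pipes[2]);
    $exitCode = proc_close($process);         // 0 means the PDF was written
}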
I am using the Drupal 7 Migrate module to create a series of nodes from JPG and EPS files. I can get them to import just fine. But I notice that when I am done importing them if I look at the nodes it creates, none of the attached filefield and thumbnail files contain filename information.
Upon inspecting the file_managed table I see that both the filename and filemime fields are empty for ONLY the files that I attached via the migrate module. This also creates an issue with downloading the files.
Now I think the problem has to do with the fact that I am using "file_link" instead of "file_copy" as the file operation I specify. The problem is I am importing around 2 TB (that's terabytes) of image files. We had to put in a special request with Rackspace just to get access to that much disk space on our server, so I can't go around copying files from one directory to the next because of space issues. So "file_link" seems like the obvious choice.
Now you probably want to see how I am doing this exactly, so here is the code snippet:
$jpg_arguments = MigrateFileFieldHandler::arguments(NULL,
'file_link', FILE_EXISTS_RENAME, 'en', array('source_field' => 'jpg_name'),
array('source_field' => 'jpg_filename'), array('source_field' => 'jpg_filename'));
$this->addFieldMapping('field_image', 'jpg_uri')
->arguments($jpg_arguments);
As you can see I am specifying no base path (just like the beer.inc example file does). I have set file_link, the language, and the source fields for the description, title, and alt.
It is able to generate thumbnails from the JPGs, but those columns of data are still missing from the db table. I traced through the functions as best I could, but I don't see what is causing this. I tried running the URI from the table through the functions that generate the filename and the filemime, and they output just fine. It is like something is removing just those segments of data.
Does anyone have any idea what this could be? I am using the Drupal 7 Migrate module version 2.2. It is running on Drupal 7.8.
Thanks,
Patrick
Ok, so I have found the answer to yet another question of mine. This is actually an issue with the migrate module itself. The issue is documented here. I will be repealing this bounty (as soon as I figure out how).
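In the meantime, if rows were already created with empty values, a rough backfill sketch like the one below could repopulate them from the stored URIs; the query scope is an assumption, while drupal_basename() and file_get_mimetype() are the core Drupal 7 helpers normally used for these columns:
$result = db_query("SELECT fid, uri FROM {file_managed} WHERE filename = '' OR filemime = ''");
foreach ($result as $record) {
  // derive the missing filename and MIME type from the file URI
  db_update('file_managed')
    ->fields(array(
      'filename' => drupal_basename($record->uri),
      'filemime' => file_get_mimetype($record->uri),
    ))
    ->condition('fid', $record->fid)
    ->execute();
}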
I am creating a download manager for educational purposes. I intend to implement a kind of system that downloads from one-click hosters just like jDownloader or Cryptload.
What are the processes/methods involved in extracting the exact download link from the host site? I know these methods may differ from hoster to hoster.
The source code for jDownloader is available to everyone, so if you have Subversion you can look at it and maybe you'll find out how it works.
link
Find the value of the href attribute of the <a> tag that wraps the image with the caption "Regular Download".
You could use simple_html_dom to parse it:
$html = new simple_html_dom();
$html->load($pageHTML);
// take the first <a> element with class "down_butt1" and read its href
$x = $html->find('a[class=down_butt1]', 0);
$exact = $x->href;
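A possible usage sketch around that snippet, with a placeholder hoster URL and output filename (simple_html_dom must be included first):
include 'simple_html_dom.php';
// fetch the hoster page that contains the "Regular Download" button
$pageHTML = file_get_contents('http://example-hoster.com/file/abc123');
// ... run the find() snippet above to get $exact ...
// then follow the extracted link and store the file locally
file_put_contents('downloaded.file', file_get_contents($exact));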