While writing text to a PDF, I'm using TextFragment to set properties for various fields. Instead of setting them for each field separately, how do I make use of a loop?
My present code:
TextFragment a = new TextFragment("Hi!");
a.setPosition(dropDown);
a.getTextState().setFont(new FontRepository().findFont("Arial"));
a.getTextState().setFontSize(10.0F);
.
.
.
TextFragment n = new TextFragment("n");
n.setPosition(dropDown);
n.getTextState().setFont(new FontRepository().findFont("Arial"));
n.getTextState().setFontSize(10.0F);
I need something like this:
some loop {
    .
    .
    TextFragment txtFrag = new TextFragment(A); // A differs for each field
    txtFrag.setPosition(dropDown);
    txtFrag.getTextState().setFont(new FontRepository().findFont("Arial"));
    txtFrag.getTextState().setFontSize(10.0F);
} // This should set properties for all fields
The string passed to TextFragment("String") is not the same for all the fields; it differs from one form field to another.
You may simply add text fragments to your PDF file, and once you have finished adding text, you can get or set properties for all the text fragments in the file using the code below:
// Load document
Document document = new Document(dataDir + "input.pdf");
// Create TextFragmentAbsorber object to extract all text fragments
TextFragmentAbsorber textFragmentAbsorber = new TextFragmentAbsorber();
// Accept the absorber for all pages of the document
document.getPages().accept(textFragmentAbsorber);
// Get the extracted text fragments into a collection
TextFragmentCollection textFragmentCollection = textFragmentAbsorber.getTextFragments();
// Loop through the text fragments and set their properties in one place
for (TextFragment textFragment : (Iterable<TextFragment>) textFragmentCollection) {
    System.out.println("Text :- " + textFragment.getText());
    textFragment.getTextState().setFont(new FontRepository().findFont("Arial"));
    textFragment.getTextState().setFontSize(10.0F);
    System.out.println("Position :- " + textFragment.getPosition());
    System.out.println("XIndent :- " + textFragment.getPosition().getXIndent());
    System.out.println("YIndent :- " + textFragment.getPosition().getYIndent());
    System.out.println("Font - Name :- " + textFragment.getTextState().getFont().getFontName());
}
// Save generated document
document.save(dataDir + "input_17.12.pdf");
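If you would rather set everything while creating the fragments, you can also loop over your strings directly. Here is a minimal sketch, assuming Aspose.PDF for Java; the values array and the target page index are placeholders for your own data:
// one string per form field (placeholder data)
String[] values = { "Hi!", "n" };
Page page = document.getPages().get_Item(1);
for (String value : values) {
    TextFragment tf = new TextFragment(value);
    tf.getTextState().setFont(new FontRepository().findFont("Arial"));
    tf.getTextState().setFontSize(10.0F);
    // append the fragment to the page
    page.getParagraphs().add(tf);
}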
You may visit Working with Text for more information on this. I hope this will be helpful. Please let us know if you need any further assistance.
I work with Aspose as Developer Evangelist.
Related
I have a text field that holds semicolon-separated codes. These codes have to be replaced with their descriptions. I have a separate map that holds the codes and descriptions, and a trigger that replaces each code with its description. The data will be loaded into this field using the Data Loader. I am afraid it might not work for a large amount of data, since I had to use inner for loops. Is there any way I can achieve this without inner for loops?
public static void updateStatus(Map<Id, Account> oldMap, Map<Id, Account> newMap)
{
    Map<String, String> DataMap = new Map<String, String>();
    List<Data_Mapper__mdt> DataMapList = [SELECT Salseforce_Value__c, External_Value__c
                                          FROM Data_Mapper__mdt
                                          WHERE active__c = true
                                          AND Field_API_Name__c = :CUSTOMFIELD_MASSTATUS
                                          AND Object_API_Name__c = :OBJECT_ACCOUNT];
    for (Data_Mapper__mdt dataMapRec : DataMapList) {
        DataMap.put(dataMapRec.External_Value__c, dataMapRec.Salseforce_Value__c);
    }
    for (Account objAcc : newMap.values())
    {
        if (objAcc.Status__c != '') {
            String updatedDescription = '';
            List<String> delimitedList = objAcc.Status__c.split('; ');
            for (String Code : delimitedList) {
                updatedDescription = DataMap.get(Code);
            }
            objAcc.Status__c = updatedDescription;
        }
    }
}
It should be fine. You have map-based access acting like a dictionary, and you have the query outside of the loop. Write a unit test that populates close to 200 accounts (that's how the trigger will be called in every Data Loader iteration). There could be some concern if you had thousands of values in that Status__c, but there's not much that can be done to optimise it.
But I want to ask you three things.
1. The way you wrote it, updatedDescription will always contain only the last decoded value. Are you sure you didn't want to write something like updatedDescription += DataMap.get(Code) + ';';, or maybe add the values to a List<String> and then call String.join on it? If you truly want just the first or last element, I'd add a break;, or simply access the last element of the split (and then you're right, you're removing the inner loop). But written like that, it looks weird.
2. Have you thought about multiple runs? If there's a workflow rule/flow/process builder, you might enter this code again, and because you're overwriting the field, I think it'll completely screw you over:
Map<String, String> mapping = new Map<String, String>{
    'one' => '1',
    'two' => '2',
    'three' => '3',
    '2' => 'lol'
};
String text = 'one;two';

List<String> temp = new List<String>();
for (String key : text.split(';')) {
    temp.add(mapping.get(key));
}
text = String.join(temp, ';');
System.debug(text); // "1;2"

// Oh no, a workflow caused my code to run again.
// Or a user edited the account.
temp = new List<String>();
for (String key : text.split(';')) {
    temp.add(mapping.get(key));
}
text = String.join(temp, ';');
System.debug(text); // "lol", some data was lost

// And again
temp = new List<String>();
for (String key : text.split(';')) {
    temp.add(mapping.get(key));
}
text = String.join(temp, ';');
System.debug(text); // "", empty
3. Are you even sure you need this code? Salesforce is perfectly fine with having separate picklist labels (what's visible to the user) and API values (what's saved to the database and referenced in Apex, validation rules...). Maybe you don't need this transformation at all. Maybe your company should look into Translation Workbench. Or even ditch this code completely and do some search-and-replace before invoking Data Loader, in some real ETL tool (or even MS Excel).
I am trying to get the count of the search results returned in the MakeMyTrip application by searching for flights from Hyderabad to Bangalore. Using the code below I am able to get the text, but how do I verify how many search results were returned?
String output = driver.findElement(By.xpath("//*[@id='left-side--wrapper']/div[3]")).getText();
System.out.println(output);
Thanks in Advance
You should use the driver.findElements() method, like this:
// your web element locator
By eachSearchElement = By.xpath("//*[@id='left-side--wrapper']/div[3]");
// get all matching elements on the page and store them in a List
List<WebElement> allSearchElements = driver.findElements(eachSearchElement);
// then simply get the size of that List
int howManyElements = allSearchElements.size();
System.out.println("There are " + howManyElements + " search results present on the page");
Hope this will help.
I have an issue with iTextSharp. Let's assume I have two rows of fields in a PDF file (the file is given to me, and I don't know how it was created).
Row 1:
data[0].#subform[0].Tabella1[0].Riga2[0].DATA[0]
data[0].#subform[0].Tabella1[0].Riga2[0].ORAINIPM[0]
data[0].#subform[0].Tabella1[0].Riga2[0].ORAINILM[0]
data[0].#subform[0].Tabella1[0].Riga2[0].ORAENDLM[0]
data[0].#subform[0].Tabella1[0].Riga2[0].ORAENDAM[0]
data[0].#subform[0].Tabella1[0].Riga2[0].ORAINIPP[0]
data[0].#subform[0].Tabella1[0].Riga2[0].ORAINILP[0]
data[0].#subform[0].Tabella1[0].Riga2[0].ORAENDLP[0]
data[0].#subform[0].Tabella1[0].Riga2[0].ORAENDAP[0]
Row 2:
data[0].#subform[0].Tabella1[0].Riga3[0].DATA[0]
data[0].#subform[0].Tabella1[0].Riga3[0].ORAINIPM[0]
data[0].#subform[0].Tabella1[0].Riga3[0].ORAINILM[0]
data[0].#subform[0].Tabella1[0].Riga3[0].ORAENDLM[0]
data[0].#subform[0].Tabella1[0].Riga3[0].ORAENDAM[0]
data[0].#subform[0].Tabella1[0].Riga3[0].ORAINIPP[0]
data[0].#subform[0].Tabella1[0].Riga3[0].ORAINILP[0]
data[0].#subform[0].Tabella1[0].Riga3[0].ORAENDLP[0]
data[0].#subform[0].Tabella1[0].Riga3[0].ORAENDAP[0]
I read these fields using the code below:
String newFile = source.Insert(source.Length - 4, "newModyfiy");
using (FileStream outFile = new FileStream(newFile, FileMode.Create))
{
    PdfReader pdfReader = new PdfReader(source);
    foreach (KeyValuePair<String, AcroFields.Item> kvp in pdfReader.AcroFields.Fields)
    {
        int fieldType = pdfReader.AcroFields.GetFieldType(kvp.Key);
        string fieldValue = pdfReader.AcroFields.GetField(kvp.Key);
        string transFieldName = pdfReader.AcroFields.GetTranslatedFieldName(kvp.Key);
        textBox1.Text = textBox1.Text + fieldType.ToString() + " " + fieldValue + " " + transFieldName + Environment.NewLine;
    }
    pdfReader.Close();
}
For both rows I am getting the values of the first row only. My goal is to write values to those fields and save a new file. When I use:
PdfStamper pdfStamper = new PdfStamper(pdfReader, new FileStream(newFile, FileMode.Create), '\0', true);
I always overwrite the values of the first row (when I try to set a value in the second row, it appears in the first). If I change the last parameter of PdfStamper to false, it writes the fields correctly, but the file is no longer manually editable.
Is it a matter of the PDF file itself? Is there a way to read and then write values to the proper fields?
I have spent a few days on this and could not find the reason for this strange behaviour.
Any small help or even a clue will be appreciated.
Edit:
I've added the mentioned PDF file.
https://ufile.io/mwni5
I have deleted some objects, but the general structure is kept.
What is the best way to search fields in a database in real time with AngularJS?
I have JSON inside a database field...
I want, while a user fills a text box (TextBox1), to search my database at that very moment; if the entered text equals a field of a table, another text box (TextBox2) should be filled automatically.
With the code below I can fill it, and it works properly, but I think it's not logical, because it first reads all of the data in the mentioned table:
$scope.showFullName = function () {
    $scope.Fullname = "";
    if ($scope.MembersNationalCode.length == 10) {
        for (var i = 0; i < $scope.NaturalMembers.length; i++) {
            if ($scope.NaturalMembers[i].NationalNumber == $scope.MembersNationalCode) {
                $scope.Fullname = $scope.NaturalMembers[i].Name + " " + $scope.NaturalMembers[i].Family;
                break; // stop at the first match
            }
        }
    }
};
I'm mentioning again: it works properly, but I'm looking for the best way, and this question is just a sample.
Thanks.
I'm trying to find a way to count the columns coming from a flat file. Actually, all my columns are concatenated in a single cell, separated with a '|'.
After various attempts, it seems that only a Script Task can handle this.
Can anyone help me with that? I'm afraid I have no experience with scripting in C# or VB.
Thanks a lot
Emmanuel
To better understand, below is the output of what I want to achieve, e.g. a single cell containing all the headers coming from a flat file. The thing is, to get to this result, I manually appended all the column names to each other in the previous step (Derived Column) in order to concatenate them with a '|' separator.
Now, if my flat file source layout changes, it won't work anymore because of this manual process. So I think I would have to use a script instead, which basically returns the number of columns (headers) in a variable and would allow me to remove the hard-coded part in the Derived Column transformation, for instance.
This is a very old thread; however, I just stumbled on a similar problem: a flat file with a number of different record "formats" inside. Many different formats, not in any particular order, meaning you might have 57 fields in one line, then 59 in the next 1,000, then 56 in the next 10,000, back to 57... well, I think you get the idea.
For lack of better ideas, I decided to break that file based on the number of commas in each line, and then import the different record types (now bunched together) using SSIS packages for each type.
So the answer to this question is below, with a bit more code to produce the files.
Hope this helps somebody with the same problem.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.IO;
namespace OddFlatFile_Transformation
{
    class RedistributeLines
    {
        /*
         * This routine opens a text file and reads it line by line.
         * For each line the number of "," (commas) is counted,
         * and the line is then written into another text file
         * based on the number of commas found.
         * For example, if there are 15 commas in a given line,
         * the line is written to WhateverFileName_15.Ext;
         * WhateverFileName and Ext are the file name and
         * extension of the original file that is being read.
         * The application tests WhateverFileName_NN.Ext for existence
         * and creates the file in case it does not exist yet.
         * To better track the split records, a sequential identifier,
         * based on the number of lines read, is added to the beginning
         * of each written line, independently of the file and record number.
         */
        static void Main(string[] args)
        {
            // get fully qualified file name from console
            String strFileToRead;
            strFileToRead = Console.ReadLine();

            // create reader & open file
            StreamReader srTextFileReader = new StreamReader(strFileToRead);

            string strLineRead = "";
            string strFileToWrite = "";
            string strLineIdentifier = "";
            string strLineToWrite = "";
            int intCountLines = 0;
            int intCountCommas = 0;
            int intDotPosition = 0;
            const string strZeroPadding = "00000000";

            // Processing begins
            Console.WriteLine("Processing begins: " + DateTime.Now);

            /* Main Loop */
            while (strLineRead != null)
            {
                // read a line of text, count its commas and create the line identifier
                strLineRead = srTextFileReader.ReadLine();
                if (strLineRead != null)
                {
                    intCountLines += 1;
                    strLineIdentifier = strZeroPadding.Substring(0, strZeroPadding.Length - intCountLines.ToString().Length) + intCountLines;
                    intCountCommas = 0;
                    foreach (char chrEachPosition in strLineRead)
                    {
                        if (chrEachPosition == ',') intCountCommas++;
                    }

                    // Based on the number of commas determined above,
                    // the name of the file to be written to is established
                    intDotPosition = strFileToRead.IndexOf(".");
                    strFileToWrite = strFileToRead.Substring(0, intDotPosition) + "_";
                    if (intCountCommas < 10)
                    {
                        strFileToWrite += "0" + intCountCommas;
                    }
                    else
                    {
                        strFileToWrite += intCountCommas;
                    }
                    strFileToWrite += strFileToRead.Substring(intDotPosition, (strFileToRead.Length - intDotPosition));

                    // Using the file name established above, the line captured
                    // during the read phase is appended to that file
                    StreamWriter swTextFileWriter = new StreamWriter(strFileToWrite, true);
                    strLineToWrite = "[" + strLineIdentifier + "] " + strLineRead;
                    swTextFileWriter.WriteLine(strLineToWrite);
                    swTextFileWriter.Close();
                    Console.WriteLine(strLineIdentifier);
                }
            }

            // close the stream
            srTextFileReader.Close();
            Console.WriteLine(DateTime.Now);
            Console.ReadLine();
        }
    }
}
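To use it, run the console app and type the fully qualified file name when prompted; per the comments above, a line containing, say, 15 commas lands in WhateverFileName_15.Ext, and every written line carries its sequential identifier.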
Please refer to my answers to the following Stack Overflow questions. Those answers might give you an idea of how to load a flat file that contains a varying number of columns.
The example in the following question reads a file containing data separated by the special character Ç (c-cedilla). In your case, the delimiter is the vertical bar (|):
UTF-8 flat file import to SQL Server 2008 not recognizing {LF} row delimiter
The example in the following question reads an EDI file that contains different sections with varying numbers of columns. The package reads the file and loads it into an SQL table, preserving the parent-child relationships:
how to load a flat file with header and detail parent child relationship into SQL server
Based on the logic used in those answers, you can also count the number of columns by splitting each row of the file on the column delimiter (vertical bar |), as sketched below.
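For illustration only, here is a minimal sketch of that counting step in Java (inside an SSIS Script Task you would write the C# or VB equivalent; the file name is a placeholder):
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

public class ColumnCounter {
    public static void main(String[] args) throws IOException {
        try (BufferedReader reader = new BufferedReader(new FileReader("flatfile.txt"))) {
            String header = reader.readLine(); // first line holds the column names
            if (header != null) {
                // the -1 limit keeps trailing empty columns in the count
                int columnCount = header.split("\\|", -1).length;
                System.out.println("Number of columns: " + columnCount);
            }
        }
    }
}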
Hope that helps.