Create PDF file from Text String or HTML String - codenameone

My codenameone app produces some data which I'd like to summarize in a PDF file for documentation purposes.
Would it be possible either to use a Java library as a cn1 library, or to use a web service that converts an HTML string into a PDF file, like this one:
https://www.html2pdfrocket.com/convert-android-html-to-pdf
Maybe someone else already figured out a best-practice for this.
Thanks a lot!

There is currently no builtin solution for that; it should be easy enough to wrap native libs or maybe even port a JavaSE lib that does this. Most developers who do something like this use a server-side process to generate the PDF.

After giving it a quick-and-dirty try with html2pdfrocket - instead of using or porting a Java library - I was simply amazed by how simple this is with codenameone. I wasn't expecting it to be so easy AT ALL.
This class and method are all you need to save the PDF file to FileSystemStorage.
import com.codename1.io.FileSystemStorage;
import com.codename1.io.Util;

public class PDFHandler {

    private static final String URL = "http://api.html2pdfrocket.com/pdf";
    private static final String APIKEY = "<YOURAPI-KEY>";

    /**
     * Stores the PDF rendered from the given HTML string or URL in the app home path under the given filename.
     *
     * @param value URL or HTML; add quotes if you have spaces (use single quotes instead of double)
     * @param filename name of the file to save
     */
    public void getFile(String value, String filename) {
        // Validate parameters
        if (value == null || value.length() < 1) {
            return;
        }
        if (filename == null || filename.length() < 1) {
            return;
        }
        // Encode the HTML/URL so it is safe to pass as a query parameter
        value = Util.encodeUrl(value);
        String fullPathToFile = FileSystemStorage.getInstance().getAppHomePath() + filename;
        Util.downloadUrlToFileSystemInBackground(URL + "?apikey=" + APIKEY + "&value=" + value, fullPathToFile);
    }
}
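If it helps, a call could look like this (the HTML string and filename are just placeholders):
PDFHandler pdf = new PDFHandler();
pdf.getFile("<html><body><h1>Report</h1><p>My app data goes here.</p></body></html>", "report.pdf");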
I hope this helps some other codenameone-newbie!

Related

How to generate dynamic path in dataset during the output method

Is there a way to create a dynamic DataSink output path in Flink?
The DataSet has type Tuple2<String, String>.
When we tried using streams, I had a way to generate a dynamic path using a custom Bucketer, like below:
@Override
public Path getBucketPath(Clock clock, Path basePath, Tuple2<String, String> element) {
    return new Path(basePath + "/schema=" + element.f0.toLowerCase().trim() + "/");
}
I would like to know if there is a similar way to generate a custom path with the DataSet API.
I poked around a bit and didn't find anything similar for batch processing, which means I think you'd have to create your own OutputFormat class that wraps a regular FileOutputFormat and does bucketing, using the same Bucketer interface.
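For what it's worth, a rough, untested sketch of that idea follows. It swaps the streaming Bucketer for plain string bucketing on the schema field, lazily opens one TextOutputFormat per bucket, and the class name BucketingTextOutputFormat is made up, so treat it as a starting point rather than a drop-in solution.
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

import org.apache.flink.api.common.io.OutputFormat;
import org.apache.flink.api.java.io.TextOutputFormat;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.core.fs.FileSystem;
import org.apache.flink.core.fs.Path;

// Illustrative only: one TextOutputFormat per "schema" bucket, created on demand.
public class BucketingTextOutputFormat implements OutputFormat<Tuple2<String, String>> {

    private final String basePath;
    private transient Map<String, TextOutputFormat<String>> buckets;
    private transient int taskNumber;
    private transient int numTasks;

    public BucketingTextOutputFormat(String basePath) {
        this.basePath = basePath;
    }

    @Override
    public void configure(Configuration parameters) {
        // nothing to configure here
    }

    @Override
    public void open(int taskNumber, int numTasks) {
        this.taskNumber = taskNumber;
        this.numTasks = numTasks;
        this.buckets = new HashMap<>();
    }

    @Override
    public void writeRecord(Tuple2<String, String> record) throws IOException {
        String bucket = record.f0.toLowerCase().trim();
        TextOutputFormat<String> format = buckets.get(bucket);
        if (format == null) {
            // same path scheme as the streaming Bucketer in the question
            format = new TextOutputFormat<>(new Path(basePath + "/schema=" + bucket + "/"));
            format.setWriteMode(FileSystem.WriteMode.OVERWRITE);
            format.configure(new Configuration());
            format.open(taskNumber, numTasks);
            buckets.put(bucket, format);
        }
        format.writeRecord(record.f1);
    }

    @Override
    public void close() throws IOException {
        for (TextOutputFormat<String> format : buckets.values()) {
            format.close();
        }
    }
}
You would then write the DataSet with something like dataSet.output(new BucketingTextOutputFormat("/base/output/dir"));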

Save result from Objectify in human readable form in datastore

I am trying to create an event log (ORMSLOG in the example) that saves events in human-readable form in Datastore.
Doing this should write a readable event:
List<Device> devices = ofy().transactionless().load().type(Device.class).list();
ORMSLOG.log(ORMSLOG.GET_ALL_DEVICES, "Devices found: " + String.valueOf(devices));
ORMSLOG is a simple class:
public class ORMSLOG {

    public final static String CREATE_DEVICE = "Create Device";
    public final static String GET_ALL_DEVICES = "Get all Devices";

    public static void log(final String event, final String data) {
        ofy().save().entity(new Event(event, data)).now();
    }
}
But the data saved in Datastore is not readable and looks like this:
(screenshot: ORMSLOG data)
I need to transform the reference to the object into human readable text.
You are just logging the String representation of the objects, which is obtained by calling their toString method. Since you did not override toString in the Device class, you are getting the default class-name@hashcode output for each object. If you override toString in your Device class to return whatever state you want, you will see a much better result. Most IDEs (e.g. Eclipse) can generate a toString method for you.
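For example, a hypothetical Device with an id and a name (adjust to whatever fields your entity actually has) could look like this:
import com.googlecode.objectify.annotation.Entity;
import com.googlecode.objectify.annotation.Id;

@Entity
public class Device {

    @Id Long id;
    String name;

    @Override
    public String toString() {
        return "Device [id=" + id + ", name=" + name + "]";
    }
}
With that in place, "Devices found: " + String.valueOf(devices) will log the ids and names instead of hash codes.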

Creating & Setting a Map into context through SpringEl

As the SpringEL docs indicate, there is EL syntax for creating a list, which then allows me to set it into the context as below:
List numbers = (List) parser.parseExpression("map['innermap']['newProperty']={1,2,3,4}").getValue(context);
However, I am not able to find a way of doing the same thing for a Map, nor can I find it in the documentation.
Is there a shorthand way of creating a map and then setting it into the context? If not, how can we go about it?
If possible, a code snippet would be helpful.
Thanks in advance.
It's now possible (since 4.1, I think):
{key:value, key:value}
http://docs.spring.io/spring/docs/current/spring-framework-reference/html/expressions.html#expressions-inline-maps
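For example (the key and value names here are arbitrary, adapted from the linked docs):
import java.util.Map;

import org.springframework.expression.ExpressionParser;
import org.springframework.expression.spel.standard.SpelExpressionParser;

public class InlineMapExample {
    public static void main(String[] args) {
        ExpressionParser parser = new SpelExpressionParser();
        // keys may be left unquoted when they are simple identifiers
        Map<?, ?> inventor = parser
                .parseExpression("{name:'Nikola', dob:'10-July-1856'}")
                .getValue(Map.class);
        System.out.println(inventor); // {name=Nikola, dob=10-July-1856}
    }
}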
No, it isn't possible yet: https://jira.spring.io/browse/SPR-9472
But you can do it with a util method, which should be registered as a SpEL function:
parser.parseExpression("#inlineMap('key1: value1, key2:' + value2)");
where you have to parse the String argument into a Map.
UPDATE
Please read this paragraph: http://docs.spring.io/spring/docs/current/spring-framework-reference/html/expressions.html#expressions-ref-functions.
At a high level, it should look like this:
import java.util.HashMap;
import java.util.Map;

public abstract class StringUtils {
    public static Map<String, Object> inlineMap(String input) {
        // naive parser for "key1: value1, key2: value2" strings, for illustration only
        Map<String, Object> map = new HashMap<String, Object>();
        for (String pair : input.split(",")) {
            String[] kv = pair.split(":", 2);
            map.put(kv[0].trim(), kv[1].trim());
        }
        return map;
    }
}
context.registerFunction("inlineMap",
        StringUtils.class.getDeclaredMethod("inlineMap", new Class[] { String.class }));
parser.parseExpression("#inlineMap('key1: value1, key2:' + value2)")
        .getValue(context, rootObject);

Training own model in opennlp

I am finding it difficult to create my own model in OpenNLP.
Can anyone tell me how to create my own model?
How should the training be done?
What should the input be, and where will the output model file get stored?
https://opennlp.apache.org/docs/1.5.3/manual/opennlp.html
This website is very useful; it shows, both in code and using the OpenNLP command-line application, how to train models of all the different types, like entity extraction and part-of-speech tagging.
I could give you some code examples here, but the page is very clear to use.
Theory-wise:
Essentially you create a file which lists the stuff you want to train on,
e.g.
Sport [whitespace] this is a page about football, rugby and stuff
Politics [whitespace] this is a page about tony blair being prime minister.
The format is described on the page above (each model expects a different format). Once you have created this file, you run it through either the API or the opennlp application (via the command line), and it generates a .bin file. Once you have this .bin file, you can load it into a model and start using it (as per the API on the website above).
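If you want the in-code route for that document-categorizer format, a minimal sketch using the OpenNLP 1.5.x API could look like the following; the file names train.txt and en-doccat.bin are placeholders.
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.nio.charset.Charset;

import opennlp.tools.doccat.DoccatModel;
import opennlp.tools.doccat.DocumentCategorizerME;
import opennlp.tools.doccat.DocumentSample;
import opennlp.tools.doccat.DocumentSampleStream;
import opennlp.tools.util.ObjectStream;
import opennlp.tools.util.PlainTextByLineStream;

public class DoccatTraining {
    public static void main(String[] args) throws Exception {
        // one "category<whitespace>text" sample per line, as described above
        ObjectStream<String> lines = new PlainTextByLineStream(
                new FileInputStream("train.txt"), Charset.forName("UTF-8"));
        ObjectStream<DocumentSample> samples = new DocumentSampleStream(lines);
        DoccatModel model = DocumentCategorizerME.train("en", samples);
        model.serialize(new FileOutputStream("en-doccat.bin"));
    }
}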
First you need to train the model with data for the required entity.
Sentences should be separated with the newline character (\n). Values should be separated from the <START> and <END> tags with a space character.
Let's say you want to create medicine entity model, so data should be something like this:
<START:medicine> Augmentin-Duo <END> is a penicillin antibiotic that contains two medicines - <START:medicine> amoxicillin trihydrate <END> and
<START:medicine> potassium clavulanate <END>. They work together to kill certain types of bacteria and are used to treat certain types of bacterial infections.
You can refer to a sample dataset for an example. Training data should have at least 15000 sentences to get better results.
Then you can use the OpenNLP TokenNameFinderTrainer.
The output file will be in .bin format.
Here is an example: Writing a custom NameFinder model in OpenNLP
For more details, refer to the OpenNLP documentation.
Perhaps this article will help you out. It describes how to do TokenNameFinder training from data extracted from Wikipedia...
nuxeo - blog - Mining Wikipedia with Hadoop and Pig for Natural Language Processing
Copy the data into data.txt and run the code below to get your own mymodel.bin.
You can refer to this data as an example: https://github.com/mccraigmccraig/opennlp/blob/master/src/test/resources/opennlp/tools/namefind/AnnotatedSentencesWithTypes.txt
import java.io.BufferedOutputStream;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.nio.charset.Charset;
import java.util.Collections;

import opennlp.tools.namefind.NameFinderME;
import opennlp.tools.namefind.NameSample;
import opennlp.tools.namefind.NameSampleDataStream;
import opennlp.tools.namefind.TokenNameFinderModel;
import opennlp.tools.util.ObjectStream;
import opennlp.tools.util.PlainTextByLineStream;

public class Training {

    // path where the trained model will be written
    static String onlpModelPath = "mymodel.bin";
    // training data set
    static String trainingDataFilePath = "data.txt";

    public static void main(String[] args) throws IOException {
        Charset charset = Charset.forName("UTF-8");
        ObjectStream<String> lineStream = new PlainTextByLineStream(
                new FileInputStream(trainingDataFilePath), charset);
        ObjectStream<NameSample> sampleStream = new NameSampleDataStream(lineStream);

        TokenNameFinderModel model = null;
        try {
            // alternative overload with explicit iterations and cutoff:
            // model = NameFinderME.train("en", "drugs", sampleStream, Collections.<String, Object>emptyMap(), 100, 4);
            model = NameFinderME.train("en", "drugs", sampleStream, Collections.<String, Object>emptyMap());
        } finally {
            sampleStream.close();
        }

        BufferedOutputStream modelOut = null;
        try {
            modelOut = new BufferedOutputStream(new FileOutputStream(onlpModelPath));
            model.serialize(modelOut);
        } finally {
            if (modelOut != null) {
                modelOut.close();
            }
        }
    }
}
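Once mymodel.bin exists, using it could look like this sketch (the tokens are made up; the input must already be tokenized):
import java.io.FileInputStream;

import opennlp.tools.namefind.NameFinderME;
import opennlp.tools.namefind.TokenNameFinderModel;
import opennlp.tools.util.Span;

public class Tagging {
    public static void main(String[] args) throws Exception {
        TokenNameFinderModel model = new TokenNameFinderModel(new FileInputStream("mymodel.bin"));
        NameFinderME nameFinder = new NameFinderME(model);
        String[] tokens = {"Augmentin-Duo", "is", "a", "penicillin", "antibiotic", "."};
        Span[] spans = nameFinder.find(tokens);
        for (Span span : spans) {
            System.out.println(span.getType() + ": " + tokens[span.getStart()]);
        }
    }
}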

Getting column length from Hibernate mappings?

To validate data I am receiving, I need to make sure that its length is not going to exceed the database column length. All the length information is stored in the Hibernate mapping files; is there any way to access this information programmatically?
You can get to it but it's not easy. You might want to do something like below at startup and store a static cache of the values. There are a lot of special cases to deal with (inheritance, etc), but it should work for simple single-column mappings. I might have left out some instanceof and null checks.
for (Iterator iter = configuration.getClassMappings(); iter.hasNext();) {
    PersistentClass persistentClass = (PersistentClass) iter.next();
    for (Iterator iter2 = persistentClass.getPropertyIterator(); iter2.hasNext();) {
        Property property = (Property) iter2.next();
        String className = persistentClass.getClassName();
        String attribute = property.getName();
        int length = ((Column) property.getColumnIterator().next()).getLength();
    }
}
Based on Brian's answer, this is what I ended up doing.
private static final Configuration configuration = new Configuration().configure();

public static int getColumnLength(String className, String propertyName) {
    PersistentClass persistentClass = configuration.getClassMapping(className);
    Property property = persistentClass.getProperty(propertyName);
    int length = ((Column) property.getColumnIterator().next()).getLength();
    return length;
}
This appears to be working well. Hope this is helpful to anyone who stumbles upon this question.
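As a hypothetical usage example for the validation mentioned in the question (the class and property names are placeholders):
// reject input that would not fit into the mapped column
int maxLength = getColumnLength("com.example.Customer", "name");
if (input.length() > maxLength) {
    throw new IllegalArgumentException("name must be at most " + maxLength + " characters");
}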
My preferred development pattern is to base the column length on a constant, which can be easily referenced:
class MyEntity {

    public static final int MY_FIELD_LENGTH = 500;

    @Column(length = MY_FIELD_LENGTH)
    String myField;
    ...
}
Sometimes it may be a problem to get the Configuration object (if you are using some application framework and you are not creating the session factory yourself using the Configuration).
If you are using Spring, for example, you can use the LocalSessionFactoryBean (from your applicationContext) to obtain the Configuration object. Then getting the column length is a piece of cake ;)
factoryBean.getConfiguration().getClassMapping(entityName).getTable().getColumn(column).getLength()
However, when I try to access the LocalSessionFactoryBean, I get a class cast exception:
LocalSessionFactoryBean factoryBean = (LocalSessionFactoryBean) WebHelper.instance().getBean("sessionFactory");
exception:
org.hibernate.impl.SessionFactoryImpl cannot be cast to org.springframework.orm.hibernate3.LocalSessionFactoryBean
<bean id="sessionFactory"
class="org.springframework.orm.hibernate3.LocalSessionFactoryBean>
This seems devious....
EDIT: found the answer. You need to use an ampersand in front of the bean name string
LocalSessionFactoryBean factoryBean = (LocalSessionFactoryBean) WebHelper.instance().getBean("&sessionFactory");
see this Spring forum post
