How to change FileNet document MimeType using com.filenet.wcm.api

I am new to FileNet. We are using P8 Content Engine - 5.1.0.2
I need to change the MimeType of an existing document using the FileNet WCM API. A workaround is to download the document, change the MimeType and re-upload it, but in that case the document Id will change. I would prefer to update the existing document instead of re-uploading it.
Basically I need to do the same thing that is described in Changing the content element MIME type programmatically, but through the FileNet WCM API.
The code is:
public boolean changeDocumentMimeType(String documentId, String docMimeType) throws IOException {
    com.filenet.wcm.api.TransportInputStream in1 = null;
    com.filenet.wcm.api.ObjectStore docObjectStore;
    com.filenet.wcm.api.Session session;
    try {
        session = ObjectFactory.getSession(this.applicationId, null, this.user, this.password);
        session.setRemoteServerUrl(this.remoteServerUrl);
        session.setRemoteServerUploadUrl(this.remoteServerUploadUrl);
        session.setRemoteServerDownloadUrl(this.remoteServerDownloadUrl);
        docObjectStore = ObjectFactory.getObjectStore(this.objectStoreName, session);
        Document doc = (Document) docObjectStore.getObject(BaseObject.TYPE_DOCUMENT, documentId);
        in1 = doc.getContent();
        System.out.println("document MIME type is: " + in1.getMimeType());
        // How do I update the MIME type of the document here?
    } catch (Exception ex) {
        ex.printStackTrace();
    }
    if (in1 != null) {
        in1.close();
    }
    return true;
}
Thank you in advance.

FileNet is an EDMS that structures its records in an OOP fashion.
FileNet Document objects are instantiated from the FileNet Document Class.
Regardless of the API used, FileNet will not allow an update to occur on MimeType.
This is a constraint of the MimeType property.
IBM FileNet MimeType Properties
The link above defines the MimeType property and lists its constraints.
The key point here is: Settability: SETTABLE_ONLY_BEFORE_CHECKIN
This means that the MimeType property can only be set while a versionable object is in the RESERVATION state. Non-versionable objects (such as Annotations) cannot have this constraint.
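To illustrate the constraint, here is a minimal sketch of the checkout / set / checkin sequence, written against the newer Content Engine Java API (com.filenet.api) that is available on CE 5.1; the same settability rule applies to the legacy com.filenet.wcm.api. The ObjectStore reference, the document id and the target MIME type are assumptions for the example, and note that checking in produces a new document version (the version series is kept, but the new version gets its own Id), which is exactly the limitation the question runs into.

import com.filenet.api.constants.AutoClassify;
import com.filenet.api.constants.CheckinType;
import com.filenet.api.constants.RefreshMode;
import com.filenet.api.constants.ReservationType;
import com.filenet.api.core.Document;
import com.filenet.api.core.Factory;
import com.filenet.api.core.ObjectStore;
import com.filenet.api.util.Id;

public class MimeTypeChangeSketch {

    // Sketch only: 'os' is assumed to come from an authenticated Connection /
    // UserContext established elsewhere in the application.
    public static void changeMimeType(ObjectStore os, String documentId, String newMimeType) {
        // Fetch the current (released) version of the document.
        Document doc = Factory.Document.fetchInstance(os, new Id(documentId), null);

        // Check out: MimeType is SETTABLE_ONLY_BEFORE_CHECKIN, so it can only
        // be changed on the reservation object.
        doc.checkout(ReservationType.EXCLUSIVE, null, null, null);
        doc.save(RefreshMode.REFRESH);

        Document reservation = (Document) doc.get_Reservation();
        reservation.set_MimeType(newMimeType);

        // Note: content elements are not necessarily carried over to the
        // reservation automatically; if the new version must keep the existing
        // content, copy it onto the reservation before checkin (omitted here).

        // Check the reservation back in as a new minor version.
        reservation.checkin(AutoClassify.DO_NOT_AUTO_CLASSIFY, CheckinType.MINOR_VERSION);
        reservation.save(RefreshMode.REFRESH);
    }
}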

Related

How to open/edit documents stored in SharePoint from a WPF/remote application?

I have tried the approaches mentioned below to open/edit documents stored in SharePoint, but none of them gives an option to pass an authentication token (if I have one) so that SharePoint does not prompt for credentials.
Using the Process.Start method
System.Diagnostics.Process.Start(documentUrl);
With this option the document always opens in read-only mode.
If I create a ProcessStartInfo object with the UserName, Domain and Password properties set and pass it to Process.Start, it always fails with an exception saying the file was not found.
Using interop assemblies
Document Doc = wordApplication.Documents.Open()
In this case, saving the document and updating it back to SharePoint needs to be handled explicitly.
There is no way to pass an authentication token.
Using the Office OpenDocuments ActiveX control
Type t = Type.GetTypeFromProgID("SharePoint.OpenDocuments.1");
if (t == null)
{
    t = Type.GetTypeFromProgID("SharePoint.OpenDocuments.2");
}
if (t == null)
{
    t = Type.GetTypeFromProgID("SharePoint.OpenDocuments.3");
}
Object o = Activator.CreateInstance(t);
object[] openParms = { documentUrl, string.Empty };
t.InvokeMember("EditDocument",
    System.Reflection.BindingFlags.InvokeMethod | System.Reflection.BindingFlags.Public | System.Reflection.BindingFlags.Instance,
    null, o, openParms);
This approach does not have a way to pass authentication information.
Any useful info or pointers to solve this will be very helpful.

Solr - Multiple attachments under one Data Import Handler record

I'm using the Data Import Handler (DIH) to create documents in Solr. Each document will have zero or more attachments. The content of the attachments (PDFs, Word docs, etc.) is parsed via Tika and stored along with a path to the attachment. The attachment's content and path are not stored in the database (and I would prefer not to do that).
I currently have a schema with all the fields needed by DIH. I then also added attachmentContent and attachmentPath fields as multiValued. However, when I use SolrJ to add the documents, only one attachment (the last one added) is stored and indexed by Solr. Here's the code:
ContentStreamUpdateRequest up = new ContentStreamUpdateRequest("/update/extract");
up.setParam("literal.id", id);
for (MultipartFile file : files) {
    // skip over files where the client didn't provide a filename
    if (file.getOriginalFilename().equals("")) {
        continue;
    }
    File destFile = new File(destPath, file.getOriginalFilename());
    try {
        file.transferTo(destFile);
        up.setParam("literal.attachmentPath", documentWebPath + acquisition.getId() + "/" + file.getOriginalFilename());
        up.addFile(destFile);
    } catch (IOException ioe) {
        ioe.printStackTrace();
    }
}
try {
    up.setAction(AbstractUpdateRequest.ACTION.COMMIT, true, true);
    solrServer.request(up);
} catch (SolrServerException sse) {
    sse.printStackTrace();
} catch (IOException ioe) {
    ioe.printStackTrace();
}
How can I get multiple attachments (content and paths) to be stored by solr? Or is there a better way to accomplish this?
Solr has a limitation here: only one content stream (attachment) is indexed per document through this API.
If you want multiple attachments indexed into one document, you can club them together as a zip file (and apply the relevant Solr patch) and have the zip indexed.
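For illustration, here is a minimal sketch of that suggestion applied to the question's code: the uploaded files are bundled into one zip, and that single archive is sent as the one content stream the handler accepts. The variable names (files, destPath, id, solrServer) are taken from the question; whether the individual zip entries end up extracted and indexed separately depends on the server-side patch mentioned above, so treat this as an assumption rather than a drop-in fix.

import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.util.List;
import java.util.zip.ZipEntry;
import java.util.zip.ZipOutputStream;

import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.SolrServerException;
import org.apache.solr.client.solrj.request.AbstractUpdateRequest;
import org.apache.solr.client.solrj.request.ContentStreamUpdateRequest;
import org.springframework.web.multipart.MultipartFile;

public class ZippedAttachmentIndexer {

    // Sketch: bundle all attachments into one zip and send it as a single
    // content stream to /update/extract.
    public void indexAttachments(SolrServer solrServer, String id, String destPath,
                                 List<MultipartFile> files) throws IOException, SolrServerException {
        File zipFile = new File(destPath, id + "-attachments.zip");
        try (ZipOutputStream zos = new ZipOutputStream(new FileOutputStream(zipFile))) {
            for (MultipartFile file : files) {
                if (file.getOriginalFilename().equals("")) {
                    continue; // skip files without a client-supplied name
                }
                zos.putNextEntry(new ZipEntry(file.getOriginalFilename()));
                zos.write(file.getBytes());
                zos.closeEntry();
            }
        }

        ContentStreamUpdateRequest up = new ContentStreamUpdateRequest("/update/extract");
        up.setParam("literal.id", id);
        up.addFile(zipFile); // one content stream per request
        up.setAction(AbstractUpdateRequest.ACTION.COMMIT, true, true);
        solrServer.request(up);
    }
}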

One To One Relationship in JPA (AppEngine)

In my Profile class I have
@OneToOne(cascade=CascadeType.PERSIST)
private ProfilePicture profilePic = null;
My method for updating the profilePic:
public Profile updateUserProfilePic(Profile user) {
    EntityManager em = EMF.get().createEntityManager();
    em.getTransaction().begin();
    Profile userx = em.find(Profile.class, user.getEmailAddress());
    userx.setProfilePic(user.getProfilePic());
    em.getTransaction().commit();
    em.close();
    return userx;
}
When updateUserProfilePic is called, it just adds another profilePic to the datastore; it doesn't replace the existing profilePic. Is my implementation correct? I want to update the profile's profilePic.
"Transient" means not persistent and not detached.
With that version of GAE JPA you need a detached or managed object there if you want it to reuse the existing object.
With v2 of Google's plugin there is a persistence property that allows merging a transient object that has its "id" fields set.
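Along those lines, here is a minimal sketch of the "reuse the managed object" route, mirroring the question's own method: instead of attaching the transient ProfilePicture coming in from the request, copy its state onto the picture that is already persistent, so the datastore does not get a second child entity. The getImage()/setImage() accessors on ProfilePicture are assumptions for the example, not from the question.

public Profile updateUserProfilePic(Profile user) {
    EntityManager em = EMF.get().createEntityManager();
    try {
        em.getTransaction().begin();
        Profile userx = em.find(Profile.class, user.getEmailAddress());
        ProfilePicture existing = userx.getProfilePic();
        if (existing == null) {
            // No picture yet: let the PERSIST cascade create one.
            userx.setProfilePic(user.getProfilePic());
        } else {
            // Copy the new state onto the already-managed picture; the change
            // is flushed when the transaction commits, so no new entity is
            // created. getImage()/setImage() are assumed accessors.
            existing.setImage(user.getProfilePic().getImage());
        }
        em.getTransaction().commit();
        return userx;
    } finally {
        em.close();
    }
}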

Using BlobRequest.CopyFrom fails with 404 Not Found error

Hope you can help.
I'm trying to copy a blob using the Protocol namespace along with a shared access signature, but the WebResponse always throws a 404 Not Found error. I have successfully used the Get/Post/Delete/List methods (where the 404 would be thrown if the permissions were insufficient), but I cannot find the answer here.
Here's some simple code that I am using:
Uri uriFrom = new Uri("file://mymachine/myfile.txt");
Uri uriTo = new Uri("file://mymachine/myfile1.txt");

// get shared access signature - set all permissions for now
uriTo = GetSharedAccessSignature(uriTo, SharedAccessPermissions.Write |
    SharedAccessPermissions.Read | SharedAccessPermissions.List);
// NOTE: This returns my uriTo object in the following format:
// http://mystoragespace.blob.core.windows.net/mycontainer/steve1.txt?se=2011-07-04T12:17:18Z&sr=b&sp=rwdl&sig=sxhGBkbDJpe9qn5d9AB7/d2LK1aun/2s5Bq8LAy8mis=

// get the account name
string accountName = uriTo.Host.Replace(".blob.core.windows.net", string.Empty);

// build the canonical string
StringBuilder canonicalName = new StringBuilder();
canonicalName.AppendFormat(System.Globalization.CultureInfo.InvariantCulture,
    "/{0}/mycontainer{1}", accountName, uriFrom.AbsolutePath);
// NOTE: my canonical string is now "/mystoragespace/mycontainer/myfile.txt"

// get the request
var request = BlobRequest.CopyFrom(uriTo, 300, canonicalName.ToString(),
    null, ConditionHeaderKind.None, null, null);
request.Proxy.Credentials = CredentialCache.DefaultNetworkCredentials;

// perform the copy operation
using (HttpWebResponse response = request.GetResponse() as HttpWebResponse)
{
    // do nothing; the file has been copied
}
So, my uriTo seems to have the appropriate permissions (I've tried various combinations) and the canonical string seems to have the correct source string. I'm not using snapshot functionality. The proxy isn't a problem as I've successfully used other methods.
Hope someone can help...
Many regards,
Steve
From Creating a Shared Access Signature:
The following table details which operations are allowed on a resource for a given set of permissions.
...
Create or update the content, block list, properties, and metadata of the specified blob. Note that copying a blob is not supported.

Viewing an XPS document with malformed URIs in WPF

I'm trying to use DocumentViewer (or, more specifically, DocumentViewer's DocumentPageView) to load a presentation that was saved from Powerpoint as XPS.
However, the author of the slides was being clever and entered one of his URLs as a pseudo-regex (e.g. http://[blog|www]mywebsite.com). The built-in XPS Viewer is able to load the document without a problem, but DocumentViewer throws an exception because it tries to validate the URI:
Failed to create a 'NavigateUri' from the text 'http://[blog|www]mywebsite.com'
I could of course go into the slide and fix the URI so that the document displays. However, since I can't control the documents that will be used with my application, I'd prefer to find a way to display the document in spite of invalid URIs (as XPS Viewer does).
Any thoughts?
The DocumentViewer is trying to create a Uri instance from the provided URL. If the URL isn't valid, the operation will fail.
You can prevent this from happening by performing validation on the URLs provided to you by the author.
(writing this without testing, so there may be some syntax errors)
public static bool IsValidUrl(this string url)
{
    if (string.IsNullOrWhiteSpace(url)) return false;
    try
    {
        var uri = new Uri(url);
        return true;
    }
    catch
    {
        // if you were implementing IDataErrorInfo rather than using a
        // lousy extension method you would catch the exception
        // here and display it to the user
        return false;
    }
}
