FolderClosedIOException while reading mail with Java mail client - jakarta-mail

This is happening for some of the emails which have attachments > 2MB.
The stack trace I am getting is this:
com.sun.mail.util.FolderClosedIOException
at com.sun.mail.imap.IMAPInputStream.forceCheckExpunged(IMAPInputStream.java:107)
at com.sun.mail.imap.IMAPInputStream.fill(IMAPInputStream.java:158)
at com.sun.mail.imap.IMAPInputStream.read(IMAPInputStream.java:218)
at com.sun.mail.imap.IMAPInputStream.read(IMAPInputStream.java:244)
at com.sun.mail.imap.IMAPMessage.writeTo(IMAPMessage.java:849)
I looked at the IMAPInputStream code to see why it's doing an expunged check; it looks like it's getting a ProtocolException:
try {
    if (peek)
        b = p.peekBody(seqnum, section, pos, cnt, readbuf);
    else
        b = p.fetchBody(seqnum, section, pos, cnt, readbuf);
} catch (ProtocolException pex) {
    forceCheckExpunged();
    throw new IOException(pex.getMessage());
}
Has anyone faced this before? I am using javax.mail 1.5.6.
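For what it's worth, a workaround often suggested for folder-closed failures on large parts (a sketch, not verified against this exact setup; 'folder', 'msgNum' and 'out' are placeholders) is to copy the message into memory while the folder is still open, so that writeTo no longer streams over the live IMAP connection:

import javax.mail.internet.MimeMessage;

// The copy constructor fetches the full content while the folder is open;
// the copy can then be read even if the connection drops mid-stream.
MimeMessage original = (MimeMessage) folder.getMessage(msgNum);
MimeMessage inMemoryCopy = new MimeMessage(original);
inMemoryCopy.writeTo(out); // reads from memory, not from the IMAP connection

Raising the IMAP fetch buffer via the mail.imap.fetchsize session property is another commonly mentioned knob; the default chunked fetching is what IMAPInputStream.fill is doing in the trace above.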

Related

Send and receive a w3c.dom.Document over socket as byte[] Java

I send a document over the socket like this:
sendFXML(asByteArray(getRequiredScene(fetchSceneRequest())));

private void sendFXML(byte[] requiredFXML) throws IOException, TransformerException {
    dataOutputStream.write(requiredFXML);
    dataOutputStream.flush();
}

private Document getRequiredScene(String requiredFile) throws IOException, ParserConfigurationException, SAXException, TransformerException {
    return new XMLLocator().getDocumentOrReturnNull(requiredFile);
}

private String fetchSceneRequest() throws IOException, ClassNotFoundException {
    return dataInputStream.readUTF();
}
On the XMLLocator side it finds the correct document and parses it correctly; I can see this by printing the whole doc to the console.
But I cannot handle it on the client's side, where it's fetched by:
public static void receivePage() throws IOException {
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    byte[] data = new byte[989898];
    int bytesRead = -1;
    while ((bytesRead = dataInputStream.read(data)) != -1) { // stops here
        baos.write(data, 0, bytesRead);
    }
    Files.write(Paths.get(FILE_TO_RECEIVED), data);
}
After the first iteration of the while loop it just blocks at the commented line.
I don't know whether the error is on the server side (maybe I send the doc in an incorrect format) or whether I read the sent byte array incorrectly. Where is the problem?
Edit:
For debugging purposes, in the receivePage() method, I've switched to a different way of reading the byte array from the server:
int count = inputStream.available();
byte[] b = new byte[count];
int bytes = dataInputStream.read(b);
System.out.println(bytes);
for (byte by : b) {
    System.out.print((char) by);
}
And now I'm able to print the fetched FXML to the console, but a new problem has appeared.
Under the debugger it receives the byte[] from the server normally, prints 2024 for count and displays the content of the file, but if I run the app normally via Shift+F10 it fetches nothing and just prints 0 to the console.
Edit2:
For some reason, once again under the debugger, it's even able to write into a file:
for (byte by : b) {
    Files.write(Paths.get(FILE_TO_RECEIVED), b);
    System.out.print((char) by);
}
But when I try to return this fxml under the debugger and then show it like this:
Parent fxmlToShow = FXMLLoader.load(getClass().getResource("/network/gui.fxml"));
Scene childScene = new Scene(fxmlToShow);
Stage window = (Stage)((Node)ae.getSource()).getScene().getWindow();
window.setScene(childScene);
return window;
It shows only previous files: on the first debug attempt it shows a blank page when I ask for the 1st one from the server; on the second debug attempt, when I ask for the 3rd page from the server, it shows me the previously requested one, and so on.
To me this seems absolutely insane, because the fxml file actually refreshes before the line
Parent fxmlToShow = FXMLLoader.load(getClass().getResource("/network/gui.fxml"));
is invoked.
Thanks everybody for participating.
The issue of FXML files being displayed incorrectly was caused by an incorrect FILE_TO_RECEIVED path.
When FXMLLoader.load(getClass().getResource("/network/gui.fxml")); loads gui.fxml, it takes it not from D:\JetBrains\IdeaProjects\Client\src\network\gui.fxml, in my case, but from D:\JetBrains\IdeaProjects\Client\OUT\PRODUCTION\Client\network\gui.fxml.
To me, that wasn't obvious.
As for the different behaviour between debug and run: in the receivePage() method the client needs to wait until data is available.
int count = inputStream.available();
If you read the docs for this method you will see:
Returns an estimate of the number of bytes that can be read (or skipped over) from this input stream ...
The available method for class InputStream always returns 0...
So you just need to wait for data to become available:
while (inputStream.available() == 0) {
    Thread.sleep(100);
}
Otherwise it just allocates byte[] b = new byte[count]; for 0 bytes, and there is nothing to read into.
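For completeness, a sketch of a more robust exchange (my own, reusing the question's field names dataOutputStream/dataInputStream): length-prefix the payload so the client knows exactly how many bytes to read, instead of polling available():

// Server: write the length first, then the payload.
private void sendFXML(byte[] requiredFXML) throws IOException {
    dataOutputStream.writeInt(requiredFXML.length);
    dataOutputStream.write(requiredFXML);
    dataOutputStream.flush();
}

// Client: read the length header, then block until exactly that many bytes arrive.
public static void receivePage() throws IOException {
    int length = dataInputStream.readInt();
    byte[] data = new byte[length];
    dataInputStream.readFully(data); // blocks until all 'length' bytes are read
    Files.write(Paths.get(FILE_TO_RECEIVED), data);
}

This works the same under debug and normal runs, because readFully waits for the data rather than sampling whatever happens to have arrived.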

Getting output of System.out as file

I'm new to programming and working on this project of Library System.
It uses linked lists to save details of books and members.
It has two functions that print details of all books in library and all members of library.
Since I made a GUI for my project, I can't print the details in the frame.
So, I thought of doing it with file.
I have successfully printed everything to a file, but now my problem is how I should display the file to the user.
Like if user presses "show all books" button, it should automatically open the respective file.
I tried searching but I can't figure out what to actually search as I'm a beginner.
Any help would be much appreciated.
UPDATE:
I tried with Desktop.getDesktop().open(file) and I am getting this error:
Exception in thread "AWT-EventQueue-0" java.lang.NumberFormatException: For input string: "Book issued successfully"
at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
at java.lang.Long.parseLong(Long.java:589)
at java.lang.Long.parseLong(Long.java:631)
at RunProgram.stringToLong(RunProgram.java:217)
at RunProgram.actionPerformed(RunProgram.java:196)
This is my stringToLong method:
private static long stringToLong(String stringObject) {
    return Long.parseLong(stringObject.trim());
}
This is the method for printing details
void printBooksIssued(long cpr) throws IOException {
    File file = new File("d:/work/test.txt");
    System.setOut(new PrintStream(new FileOutputStream(file)));
    int index = 0;
    if (searchMember(cpr) == -1)
        System.out.println("Member doesn't exist.");
    LibMember m = membersList.get(index);
    while (index < sizeMembersList()) {
        if (m.getCprNum() == cpr) {
            System.out.println(Arrays.toString(m.getBooksIssued()));
            return;
        }
        index++;
        if (index < sizeMembersList())
            m = membersList.get(index);
    }
    Desktop.getDesktop().open(file);
}
And this is how I call this method inside the action listener method:
case "Print details of books issued to member": {
long cprNum = stringToLong(cpr.getText());
try {
ITLib.printBooksIssued(cprNum);
}
catch (IOException e1){
fName.setText("Error. Make sure CPR number is correct and try again.");
}
}
The thing I don't understand is that the NumberFormatException for the string "Book issued successfully" comes from a button and text field that are completely separate from this one. Then why is the error pointing there? For reference, the "Issue" case is:
case "Issue": {
long an = stringToLong(NUM.getText());
long cpr = stringToLong(CPR.getText());
if (ITLib.issueBook(an, cpr))
NUM.setText("Book issued successfully");
else
NUM.setText("Book couldn't be issued. Try again later.");
break;
}
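One detail worth checking (my observation, not from the original thread): the "Print details..." case above has no break, so after printing it falls straight through into the "Issue" case, where stringToLong(NUM.getText()) tries to parse whatever text is currently in NUM, such as "Book issued successfully". That would explain the seemingly unrelated NumberFormatException:

case "Print details of books issued to member": {
    long cprNum = stringToLong(cpr.getText());
    try {
        ITLib.printBooksIssued(cprNum);
    } catch (IOException e1) {
        fName.setText("Error. Make sure CPR number is correct and try again.");
    }
    break; // without this, execution falls through into the "Issue" case
}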
If you are using Java and save the files with the suffix ".txt", you can use Desktop.getDesktop().
Here is an example that will pop up the default system editor for text:
File file = new File("d:/work/test.txt");
System.setOut(new PrintStream(new FileOutputStream(file)));
System.out.println("Testing 1");
System.out.println("Testing 2");
Desktop.getDesktop().open(file);
On Windows this will open notepad.exe with the "test.txt" file.
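One caveat with this example (my note, not part of the original answer): the PrintStream is never flushed or closed before Desktop.getDesktop().open(file), so the editor can open an empty or truncated file. Writing through a dedicated stream and closing it first avoids that, and also avoids redirecting System.out for the rest of the program:

File file = new File("d:/work/test.txt");
try (PrintStream out = new PrintStream(new FileOutputStream(file))) {
    out.println("Testing 1");
    out.println("Testing 2");
} // try-with-resources closes the stream, flushing the contents to disk
Desktop.getDesktop().open(file);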

Behavior of initial.min.cluster.size

Is Hazelcast always blocking in case initial.min.cluster.size is not reached? If not, under which situations is it not?
Details:
I use the following code to initialize Hazelcast:
Config cfg = new Config();
cfg.setProperty("hazelcast.initial.min.cluster.size",
        Integer.toString(minimumInitialMembersInHazelCluster)); // 2 in this case
cfg.getGroupConfig().setName(clusterName);
NetworkConfig network = cfg.getNetworkConfig();
JoinConfig join = network.getJoin();
join.getMulticastConfig().setEnabled(false);
join.getTcpIpConfig().addMember("192.168.0.1").addMember("192.168.0.2")
        .addMember("192.168.0.3").addMember("192.168.0.4")
        .addMember("192.168.0.5").addMember("192.168.0.6")
        .addMember("192.168.0.7").setRequiredMember(null).setEnabled(true);
network.getInterfaces().setEnabled(true).addInterface("192.168.0.*");
join.getMulticastConfig().setMulticastTimeoutSeconds(MCSOCK_TIMEOUT / 100);
hazelInst = Hazelcast.newHazelcastInstance(cfg);
distrDischargedTTGs = hazelInst.getList(clusterName);
and get log messages like
debug: starting Hazel pullExternal from Hazelcluster with 1 members.
Does that definitely mean there was another member that joined and left already? It does not look like that is the case from the log files of the other instance. Hence I wonder whether there are situations where hazelInst = Hazelcast.newHazelcastInstance(cfg); does not block even though it is the only instance in the Hazelcast cluster.
newHazelcastInstance blocks until the cluster has the required number of members.
See the code below for how it is implemented:
private static void awaitMinimalClusterSize(HazelcastInstanceImpl hazelcastInstance, Node node, boolean firstMember)
        throws InterruptedException {
    final int initialMinClusterSize = node.groupProperties.INITIAL_MIN_CLUSTER_SIZE.getInteger();
    while (node.getClusterService().getSize() < initialMinClusterSize) {
        try {
            hazelcastInstance.logger.info("HazelcastInstance waiting for cluster size of " + initialMinClusterSize);
            //noinspection BusyWait
            Thread.sleep(TimeUnit.SECONDS.toMillis(1));
        } catch (InterruptedException ignored) {
        }
    }
    if (initialMinClusterSize > 1) {
        if (firstMember) {
            node.partitionService.firstArrangement();
        } else {
            Thread.sleep(TimeUnit.SECONDS.toMillis(3));
        }
        hazelcastInstance.logger.info("HazelcastInstance starting after waiting for cluster size of "
                + initialMinClusterSize);
    }
}
If you set the logging to debug then perhaps you can see better what is happening. Members joining and leaving should already be visible at the info level.
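To make the blocking behaviour concrete, here is a minimal standalone sketch (my own, relying on Hazelcast's default multicast discovery on a single machine): the first newHazelcastInstance() call only returns once the second member has joined.

import com.hazelcast.config.Config;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;

public class MinClusterSizeDemo {

    private static Config config() {
        Config cfg = new Config();
        cfg.setProperty("hazelcast.initial.min.cluster.size", "2");
        return cfg;
    }

    public static void main(String[] args) {
        // Start a second member after a delay; until it joins, the
        // newHazelcastInstance() call below stays blocked in awaitMinimalClusterSize.
        new Thread(() -> {
            try {
                Thread.sleep(5000);
            } catch (InterruptedException ignored) {
            }
            Hazelcast.newHazelcastInstance(config());
        }).start();

        long start = System.currentTimeMillis();
        HazelcastInstance first = Hazelcast.newHazelcastInstance(config()); // blocks ~5 s
        System.out.println("unblocked after " + (System.currentTimeMillis() - start) + " ms");
    }
}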

read cloud storage content with "gzip" encoding for "application/octet-stream" type content

We're using "Google Cloud Storage Client Library" for app engine, with simply "GcsFileOptions.Builder.contentEncoding("gzip")" at file creation time, we got the following problem when reading the file:
com.google.appengine.tools.cloudstorage.NonRetriableException: java.lang.RuntimeException: com.google.appengine.tools.cloudstorage.SimpleGcsInputChannelImpl$1#1c07d21: Unexpected cause of ExecutionException
at com.google.appengine.tools.cloudstorage.RetryHelper.doRetry(RetryHelper.java:87)
at com.google.appengine.tools.cloudstorage.RetryHelper.runWithRetries(RetryHelper.java:129)
at com.google.appengine.tools.cloudstorage.RetryHelper.runWithRetries(RetryHelper.java:123)
at com.google.appengine.tools.cloudstorage.SimpleGcsInputChannelImpl.read(SimpleGcsInputChannelImpl.java:81)
...
Caused by: java.lang.RuntimeException: com.google.appengine.tools.cloudstorage.SimpleGcsInputChannelImpl$1#1c07d21: Unexpected cause of ExecutionException
at com.google.appengine.tools.cloudstorage.SimpleGcsInputChannelImpl$1.call(SimpleGcsInputChannelImpl.java:101)
at com.google.appengine.tools.cloudstorage.SimpleGcsInputChannelImpl$1.call(SimpleGcsInputChannelImpl.java:81)
at com.google.appengine.tools.cloudstorage.RetryHelper.doRetry(RetryHelper.java:75)
... 56 more
Caused by: java.lang.IllegalStateException: com.google.appengine.tools.cloudstorage.oauth.OauthRawGcsService$2#1d8c25d: got 46483 > wanted 19823
at com.google.common.base.Preconditions.checkState(Preconditions.java:177)
at com.google.appengine.tools.cloudstorage.oauth.OauthRawGcsService$2.wrap(OauthRawGcsService.java:418)
at com.google.appengine.tools.cloudstorage.oauth.OauthRawGcsService$2.wrap(OauthRawGcsService.java:398)
at com.google.appengine.api.utils.FutureWrapper.wrapAndCache(FutureWrapper.java:53)
at com.google.appengine.api.utils.FutureWrapper.get(FutureWrapper.java:90)
at com.google.appengine.tools.cloudstorage.SimpleGcsInputChannelImpl$1.call(SimpleGcsInputChannelImpl.java:86)
... 58 more
What else should be added to read files with "gzip" compression so the content can be read in App Engine? (curl-ing the Cloud Storage URL from the client side works fine for both compressed and uncompressed files.)
This is the code that works for uncompressed objects:
byte[] blobContent = new byte[0];
try
{
    GcsFileMetadata metaData = gcsService.getMetadata(fileName);
    int fileSize = (int) metaData.getLength();
    final int chunkSize = BlobstoreService.MAX_BLOB_FETCH_SIZE;
    LOG.info("content encoding: " + metaData.getOptions().getContentEncoding()); // "gzip" here
    LOG.info("input size " + fileSize); // the size is obviously the compressed size!
    for (long offset = 0; offset < fileSize;)
    {
        if (offset != 0)
        {
            LOG.info("Handling extra size for " + filePath + " at " + offset);
        }
        final int size = Math.min(chunkSize, fileSize);
        ByteBuffer result = ByteBuffer.allocate(size);
        GcsInputChannel readChannel = gcsService.openReadChannel(fileName, offset);
        try
        {
            readChannel.read(result); // <<<< here the exception was thrown
        }
        finally
        {
            ......
It is now compressed by:
GcsFilename filename = new GcsFilename(bucketName, filePath);
GcsFileOptions.Builder builder = new GcsFileOptions.Builder().mimeType(image_type);
builder = builder.contentEncoding("gzip");
GcsOutputChannel writeChannel = gcsService.createOrReplace(filename, builder.build());
ByteArrayOutputStream byteStream = new ByteArrayOutputStream(blob_content.length);
try
{
    GZIPOutputStream zipStream = new GZIPOutputStream(byteStream);
    try
    {
        zipStream.write(blob_content);
    }
    finally
    {
        zipStream.close();
    }
}
finally
{
    byteStream.close();
}
byte[] compressedData = byteStream.toByteArray();
writeChannel.write(ByteBuffer.wrap(compressedData));
The blob_content is compressed from 46483 bytes to 19823 bytes.
I think it is a bug in Google's code:
https://code.google.com/p/appengine-gcs-client/source/browse/trunk/java/src/main/java/com/google/appengine/tools/cloudstorage/oauth/OauthRawGcsService.java, L418:
Preconditions.checkState(content.length <= want, "%s: got %s > wanted %s", this, content.length, want);
The HTTPResponse has already decoded the blob, so the precondition is wrong here.
If I understand correctly, you have to set the mimeType:
GcsFileOptions options = new GcsFileOptions.Builder().mimeType("text/html")
Google Cloud Storage does not compress or decompress objects:
https://developers.google.com/storage/docs/reference-headers?csw=1#contentencoding
I hope that's what you want to do.
Looking at your code it seems like there is a mismatch between what is stored and what is read. The documentation specifies that compression is not done for you (https://developers.google.com/storage/docs/reference-headers?csw=1#contentencoding). You will need to do the actual compression manually.
Also, if you look at the implementation of the class that throws the exception (https://code.google.com/p/appengine-gcs-client/source/browse/trunk/java/src/main/java/com/google/appengine/tools/cloudstorage/oauth/OauthRawGcsService.java?r=81&spec=svn134) you will notice that you get the original contents back, but you're actually expecting compressed content. Check the method readObjectAsync in the above-mentioned class.
It looks like the content persisted might not be gzipped or the content length is not set properly. What you should do is verify the length of the compressed stream just before writing it into the channel. You should also verify that the content length is set correctly when doing the HTTP request. It would be useful to see the actual HTTP request headers and make sure that the content-length header matches the actual content length in the HTTP response.
Also it looks like contentEncoding could be set incorrectly. Try using .contentEncoding("Content-Encoding: gzip") as used in this TCK test. Although still, the best thing to do is to inspect the HTTP request and response. You can use Wireshark to do that easily.
Also you need to make sure that GcsOutputChannel is closed, as that's when the file is finalized.
Hope this puts you on the right track. To gzip your contents you can use Java's GZIPOutputStream, and GZIPInputStream to read them back.
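For reference, a minimal decompression helper (my sketch; it only applies if the bytes you read back really are the raw gzip stream, which the stack trace above suggests may not be the case when the service has already decoded them):

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.zip.GZIPInputStream;

// Decompress a byte[] that holds a raw gzip stream.
static byte[] gunzip(byte[] compressed) throws IOException {
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    try (GZIPInputStream in = new GZIPInputStream(new ByteArrayInputStream(compressed))) {
        byte[] buf = new byte[8192];
        int n;
        while ((n = in.read(buf)) != -1) {
            out.write(buf, 0, n);
        }
    }
    return out.toByteArray();
}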
I'm seeing the same issue, easily reproducible by uploading a file with gsutil cp -Z, then trying to open it with the following:
ByteArrayOutputStream output = new ByteArrayOutputStream();
try (GcsInputChannel readChannel = svc.openReadChannel(filename, 0)) {
    try (InputStream input = Channels.newInputStream(readChannel)) {
        IOUtils.copy(input, output);
    }
}
This causes an exception like this:
java.lang.IllegalStateException:
....oauth.OauthRawGcsService$2#1883798: got 64303 > wanted 4096
at ....Preconditions.checkState(Preconditions.java:199)
at ....oauth.OauthRawGcsService$2.wrap(OauthRawGcsService.java:519)
at ....oauth.OauthRawGcsService$2.wrap(OauthRawGcsService.java:499)
The only workaround I've found is to read the entire file into memory using readChannel.read:
int fileSize = 64303;
ByteBuffer result = ByteBuffer.allocate(fileSize);
try (GcsInputChannel readChannel = gcs.openReadChannel(new GcsFilename("mybucket", "mygzippedfile.xml"), 0)) {
    readChannel.read(result);
}
Unfortunately, this only works if the size of the ByteBuffer is greater than or equal to the uncompressed size of the file, which it is not possible to obtain via the API.
I've also posted my comment to an issue registered with google: https://code.google.com/p/googleappengine/issues/detail?id=10445
This is my function for reading compressed gzip files:
public byte[] getUpdate(String fileName) throws IOException
{
    GcsFilename fileNameObj = new GcsFilename(defaultBucketName, fileName);
    try (GcsInputChannel readChannel = gcsService.openReadChannel(fileNameObj, 0))
    {
        maxSizeBuffer.clear();
        readChannel.read(maxSizeBuffer);
    }
    byte[] result = maxSizeBuffer.array();
    return result;
}
The core issue is that you cannot use the size of the saved file, because Google Storage hands the content back at its original (uncompressed) size; the library then compares the size it expected with the real size, and they differ:
Preconditions.checkState(content.length <= want, "%s: got %s > wanted %s", this, content.length, want);
So I solved it by allocating the biggest amount possible for these files, using BlobstoreService.MAX_BLOB_FETCH_SIZE. maxSizeBuffer is actually allocated only once, outside the function:
ByteBuffer maxSizeBuffer = ByteBuffer.allocate(BlobstoreService.MAX_BLOB_FETCH_SIZE);
And maxSizeBuffer.clear() resets the buffer before each read.

Database problem: how to diagnose and fix the problem?

I created an application which stores values into a database and retrieves the stored data. When running the application in run mode everything seems to work fine (the values are stored and retrieved successfully), but when I run it in debug mode the process throws an IllegalStateException, and so far I haven't found the cause.
The method which retrieves a Recording object is the following:
public Recording getRecording(String filename) {
    Recording recording = null;
    String where = RECORDING_FILENAME + "='" + filename + "'";
    Log.v(TAG, "retrieving recording: filename = " + filename);
    try {
        cursor = db.query(DATABASE_TABLE_RECORDINGS, new String[]{RECORDING_FILENAME, RECORDING_TITLE, RECORDING_TAGS, RECORDING_PRIVACY_LEVEL, RECORDING_LOCATION, RECORDING_GEO_TAGS, RECORDING_GEO_TAGGING_ENABLED, RECORDING_TIME_SECONDS, RECORDING_SELECTED_COMMUNITY}, where, null, null, null, null);
        if (cursor.getCount() > 0) {
            cursor.moveToFirst();
            //String filename = c.getString(0);
            String title = cursor.getString(1);
            String tags = cursor.getString(2);
            int privacyLevel = cursor.getInt(3);
            String location = cursor.getString(4);
            String geoTags = cursor.getString(5);
            int iGeoTaggingEnabled = cursor.getInt(6);
            String recordingTime = cursor.getString(7);
            String communityID = cursor.getString(8);
            cursor.close();
            recording = new Recording(filename, title, tags, privacyLevel, location, geoTags, iGeoTaggingEnabled, recordingTime, communityID);
        }
    }
    catch (SQLException e) {
        String msg = e.getMessage();
        Log.w(TAG, msg);
        recording = null;
    }
    return recording;
}
and it is called from another class (Settings):
private Recording getRecording(String filename) {
    dbAdapter = dbAdapter.open();
    Recording recording = dbAdapter.getRecording(filename);
    dbAdapter.close();
    return recording;
}
While stepping through the code above everything works fine, but then I notice an exception in another thread:
(screenshot of the IllegalStateException: http://img509.imageshack.us/img509/862/illegalstateexception.jpg)
I know neither the possible cause of this exception nor how to debug the code in that thread to diagnose it.
I would be very thankful if anyone knew what the possible issue is here.
Thank you!
It looks like cursor.close() is inside an "if": that's when SQLiteCursor.finalize() will throw an IllegalStateException (I googled for it). You might be getting an empty result set, for instance, if some other process/thread didn't have time to commit.
Close the cursor always, even if its result set is empty.
I'd also advise you to access fields by name, not by index, for future compatibility, and to do both close()s in finally{} blocks.
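To illustrate that advice (a sketch against the Android SQLite API, reusing the question's column constants), move the close() into finally so it runs on every path, empty result set included:

Cursor cursor = null;
try {
    cursor = db.query(DATABASE_TABLE_RECORDINGS, null,
            RECORDING_FILENAME + "=?", new String[]{filename}, null, null, null);
    if (cursor.moveToFirst()) {
        // access fields by name instead of by position
        String title = cursor.getString(cursor.getColumnIndexOrThrow(RECORDING_TITLE));
        // ... read the remaining columns and build the Recording ...
    }
} catch (SQLException e) {
    Log.w(TAG, e.getMessage());
} finally {
    if (cursor != null) {
        cursor.close(); // always runs, even when the result set is empty or an exception is thrown
    }
}

Using a ?-parameterized selection instead of concatenating the filename into the WHERE clause also avoids quoting problems and SQL injection.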
