The company that I work for has come across a pretty significant issue with one of our releases that has brought our project to a screeching halt.
A third-party application that we manage generates Word documents from base64-encoded strings stored in our SQL Server. The issue we are having is that in some cases, when one of these documents is sent via SMTP and opened by the user, the file fails to open.
When the file fails, the server locks up. Memory and CPU usage on the server then grow exponentially, to the point that the only option is to kill the process on the server side in order to prevent failure and downtime for the rest of the users on the network.
We are using Windows 7 with Microsoft Office 2013 and the latest version of SQL Server.
What is apparent is that the Word document created from the base64 string is corrupt. What isn't apparent is how this appears to bring the entire server system down in one fell swoop.
Has anyone come across this issue before, and if so, what solution did you come up with? We do not have access to the binaries of the third-party application that generates the files. We aren't able to reproduce the issue manually in order to come up with a working test case to present to the third party, so we are stumped. Any ideas?
I would need more details to understand your scenario. You say this is the order of events:
1. Word file is sent via SMTP (presumably an email to an Outlook client)
2. User receives email; opens attached file
3. Memory and CPU on the server go to 100 percent. This creates downtime for the rest of the users.
4. Need to kill this process to recover.
Since Outlook is a client-side application, it must be the Word document attachment that is causing this problem. Can you post a sample document in a public place, like a free OneDrive account? Presumably this document creates the problem. Maybe it has some VBA code? Try this with a blank document.
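If it helps to build that test case, here is a minimal sketch (C#; the connection string, table and column names are hypothetical) that pulls one of the base64 strings out of SQL Server, decodes it to a .docx, and checks both that the package is a valid ZIP/OOXML container and whether it contains a VBA project:

```csharp
using System;
using System.Data.SqlClient;
using System.IO;
using System.IO.Compression;   // requires a reference to System.IO.Compression.FileSystem
using System.Linq;

class DocInspector
{
    static void Main()
    {
        // Hypothetical connection string, table and column names - adjust to your schema.
        const string connectionString = "Server=.;Database=Docs;Integrated Security=true";
        const string query = "SELECT TOP 1 Base64Content FROM GeneratedDocuments WHERE DocumentId = @id";

        string base64;
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(query, conn))
        {
            cmd.Parameters.AddWithValue("@id", 42);
            conn.Open();
            base64 = (string)cmd.ExecuteScalar();
        }

        // Decode the stored string back to the .docx bytes.
        byte[] bytes = Convert.FromBase64String(base64);
        File.WriteAllBytes("suspect.docx", bytes);

        // A valid .docx is a ZIP package; a corrupt one throws InvalidDataException here.
        try
        {
            using (var zip = ZipFile.OpenRead("suspect.docx"))
            {
                bool hasMacros = zip.Entries.Any(
                    e => e.Name.Equals("vbaProject.bin", StringComparison.OrdinalIgnoreCase));
                Console.WriteLine(hasMacros ? "Document contains a VBA project." : "No VBA project found.");
            }
        }
        catch (InvalidDataException)
        {
            Console.WriteLine("Not a valid ZIP/OOXML package - the stored content is corrupt.");
        }
    }
}
```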
I've been searching for information for a long time and can't find it. I'm starting to think it can't be done if the .parquet files are in Azure Data Lake Storage.
I have a folder with subfolders in Azure Data Lake Storage, and these subfolders contain many .parquet files. I manage to retrieve them using the ListAzureDataLakeStorage + FetchAzureDataLakeStorage combination. Then I try to pass them through a PutDatabaseRecord (which I believe is the correct processor for dumping them into the DB).
I think I have the PutDatabaseRecord configured correctly, but when it executes it gives me an error: "Failed to process session due to Failed to process StandardFlowFileRecord due to java.lang.NullPointerException: Name is null".
I'm not sure I'm using PutDatabaseRecord correctly. I thought PutDatabaseRecord read the incoming FlowFiles, interpreting their content as .parquet (it is supposed to use a ParquetReader as its Record Reader), and was therefore able to understand the data as records. But it surprises me that there is no need to indicate how to interpret the .parquet, nor how to map its columns to the columns of the DB table. Does it in fact require the FlowFile content to already arrive as records?
The truth is I can't explain myself much better, because I don't really understand what counts as a record in NiFi or how a record relates to reading a .parquet file.
Either I am missing a processor or I am configuring something wrong. The only thing I can find is FetchParquet, which seems to be able to read a .parquet and put it into the FlowFile as records. However, it can only be used with ListHDFS or ListFile, which do not allow me to fetch data from Azure Data Lake Storage.
After several tests (using the ConvertRecord and QueryRecord processors), I have come to the conclusion that the problem is in how the ParquetReader reads the content of the incoming FlowFiles, since every processor that needs a ParquetReader gives the same error. By downloading the content of the FlowFile that enters the processor using the ParquetReader (whichever processor it is) and opening it in a .parquet viewer, I have verified that the content itself is fine.
Not knowing what else to do, I have attached a screenshot of the specific error. I still don't know what "Name" the error refers to.
[Screenshot: "Name is null" error]
Note: I also posted my problem on Cloudera, perhaps better explained there. I'll leave the link in case someone wants to look at it. (https://community.cloudera.com/t5/Support-Questions/How-can-I-dump-the-parquet-data-that-is-in-Azure/td-p/316020)
In the end, the closest thing to the error I was getting is described here (https://issues.apache.org/jira/browse/NIFI-7817). It appears to be a bug related to the creation of the ParquetReader, which makes sense, because it affected every processor that used a ParquetReader, and the FlowFiles never even entered the processor that used it.
I was using NiFi version 1.12.1. I downloaded version 1.13.2 and it no longer gives the "Name is null" error; the FlowFiles now enter the processor. On the download page of the new version (https://nifi.apache.org/download.html) you can access the Release Notes and the Migration Guidance to see what has been fixed with respect to previous versions and which processors need care when migrating.
However, even though the data now goes into the processor, it still gives me an error. It is a different one, though, so I will open it in another post.
Hi, I'm a newbie on Stack Overflow!
As mentioned in the question title, I've been storing the email image's path in the DB and serving it via localhost. Once the email is sent and received, Outlook automatically blocks the image download and I need to download it manually (not a big issue here).
Then I started to wonder: what if my website/server is down? If it is down, the email will not be able to locate and download the image at all. So I'm wondering if there are any alternative ways to display the image without worrying about the availability of my server.
Thanks in advance for any incoming advice/replies!
Since your primary concern seems to be about failure mitigation and not actual coding, I'll direct you to this question.
Your current method isn't actually embedding the images; it is just creating a link. What you want to do is add the images as linked resources. This WILL make your emails larger in size and slower to send, but as long as you aren't a spammer, you should be OK.
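For reference, here is a rough sketch of what that looks like with System.Net.Mail; the addresses, SMTP host, and image path are placeholders:

```csharp
using System.Net.Mail;
using System.Net.Mime;

class EmbeddedImageMailer
{
    static void Main()
    {
        // The HTML body references the image by Content-ID rather than by a URL
        // on your server, so it displays even if the server is down.
        string html = "<p>Hello!</p><img src=\"cid:logoImage\" />";
        AlternateView htmlView =
            AlternateView.CreateAlternateViewFromString(html, null, MediaTypeNames.Text.Html);

        // "logo.png" is a placeholder path to the image you want to embed.
        LinkedResource logo = new LinkedResource("logo.png", "image/png");
        logo.ContentId = "logoImage";
        htmlView.LinkedResources.Add(logo);

        MailMessage message = new MailMessage(
            "from@example.com", "to@example.com", "Subject", "Plain-text fallback body");
        message.AlternateViews.Add(htmlView);

        using (SmtpClient client = new SmtpClient("smtp.example.com"))
        {
            client.Send(message);
        }
    }
}
```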
Alternatively, you could have an enterprise level failure plan where your server would go offline and a mirrored server in a different location would begin serving up the data/images.
I have a page on an intranet site (that I didn't create) which allows a user to specify a .txt file and then writes the results of a SQL stored procedure to that file using StreamWriter.
It apparently stopped working for some of my workstations several months ago, so I can't trace it to any specific changes (However, I know the code itself didn't change).
If I access and use the page on the server (where the wwwroot and the applicable database are located), it successfully writes the .txt, whether I specify a local file or one on a workstation on the network. Users on some workstations, though, are no longer able to write the file.
(It is also not just writing a blank file. The "Date Modified" remains unchanged.)
The problem seems to be machine-related rather than user-related, as I can log in as the same user on different workstations and get different results.
I still think it may have something to do with permissions, so I created a .txt on a problem workstation with every possible account having full permissions, but no luck. Permissions on the database, stored procedure, and folder destination seem correct.
Any suggestions welcome, Thanks.
You mean to tell us that the page completes successfully, your calls to StreamWriter all succeed, and yet in the end there is no file? I find that really hard to digest. A much more likely hypothesis is that the page fails and an exception is thrown. Such an exception would normally be logged in the system event log.
From the description of your symptoms, the issue could be a constrained delegation scenario: the page is impersonating the IE user and cannot flow those credentials when accessing the network resource.
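If you want to make such a failure visible, a rough sketch of wrapping the write in a try/catch that logs to the event log follows; the event source name is hypothetical and has to be registered on the server beforehand:

```csharp
using System;
using System.Diagnostics;
using System.IO;

public static class ExportWriter
{
    public static void WriteExport(string path, string contents)
    {
        try
        {
            using (StreamWriter writer = new StreamWriter(path))
            {
                writer.Write(contents);
            }
        }
        catch (Exception ex)
        {
            // "IntranetExport" is a hypothetical event source; it must be created
            // once (with admin rights) before this call will succeed.
            EventLog.WriteEntry(
                "IntranetExport",
                string.Format("Export to '{0}' failed: {1}", path, ex),
                EventLogEntryType.Error);
            throw;
        }
    }
}
```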
It turned out to be the IE security setting "Include local directory when uploading files to a server". This setting is disabled by default.
The working PCs had the setting enabled for some reason. Adding the site as a "Trusted Site" also enables the setting, achieving the same result.
I am looking at writing a Silverlight app that I plan to run out-of-browser (OOB) so it can be used on both PC and Mac.
I have been doing a little investigation into IsolatedStorageFile, and what I understand is that it will work on both PC and Mac without a problem... Is that correct?
The application I am building is a business application that will submit details back to the main database if there is an available connection. If not, I want to store the information locally until there is an available connection.
My question is: let's say I have 3 user accounts using the same machine. Can I have the isolated storage file stored in the same place, or must it be under each user's profile?
I don't want to have orphaned records which I could see happening if the data is stored on each user's profile.
Any advice would be great!
What I understand is that it will work on both PC and Mac without a problem.
That is correct. You don't need to worry about the mechanics of how it is persisted to disk.
I have 3 user accounts using the same machine. Can I have the isolated storage file stored in the same place, or must it be under the user profile?
IS is located under the user profile. In a full-trust (elevated) OOB app you may be able to store files elsewhere on the file system by using FileSystemObject or some COM interop, but there is no guarantee that you can get to that file again (note: I haven't played with saving files outside IS, so I may be wrong/misinformed on this). If you can write files anywhere on the file system, you should be very careful doing it - what if you are running on a Mac?
I don't want to have orphaned records which I could see happening if the data is stored on each user's profile.
If you mean data may be stored locally because there is no connection, and then that user logs off and never logs back in to that machine, so their data never syncs to the server - then yes, that is a possibility. Having a service monitoring for saved data files would be ideal, but you can't do that under SL. Completely eliminating that issue may require a change in your product, such as writing it as a WPF client instead of SL.
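To illustrate the "store locally until a connection is available" part, here is a minimal sketch of a per-user offline queue in Silverlight isolated storage; the file name and the one-serialized-record-per-line format are assumptions:

```csharp
using System;
using System.IO;
using System.IO.IsolatedStorage;

// Hypothetical offline queue: pending records are appended to a file in the
// per-user, per-application isolated store until the server is reachable.
public static class OfflineQueue
{
    private const string FileName = "pending-records.txt";

    public static void Enqueue(string serializedRecord)
    {
        using (IsolatedStorageFile store = IsolatedStorageFile.GetUserStoreForApplication())
        using (IsolatedStorageFileStream stream = store.OpenFile(FileName, FileMode.Append, FileAccess.Write))
        using (StreamWriter writer = new StreamWriter(stream))
        {
            writer.WriteLine(serializedRecord);
        }
    }

    public static string[] DequeueAll()
    {
        using (IsolatedStorageFile store = IsolatedStorageFile.GetUserStoreForApplication())
        {
            if (!store.FileExists(FileName))
                return new string[0];

            string allText;
            using (IsolatedStorageFileStream stream = store.OpenFile(FileName, FileMode.Open, FileAccess.Read))
            using (StreamReader reader = new StreamReader(stream))
            {
                allText = reader.ReadToEnd();
            }

            // Once read, drop the local copy; the caller is responsible for syncing.
            store.DeleteFile(FileName);
            return allText.Split(new[] { Environment.NewLine }, StringSplitOptions.RemoveEmptyEntries);
        }
    }
}
```

Because the store is per-user, each of the 3 accounts gets its own queue; records only become "orphaned" if a user never logs back in on that machine.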
I'm working on a rather large classic asp / SQL Server application.
A new version was rolled out a few months ago with a lot of new features, and I must have a very nasty bug somewhere: some very basic pages randomly take a very long time to execute.
A few clues :
It isn't the database: when I run the query profiler, it doesn't detect any long-running query.
When I launch the IIS diagnostic tools, reqviewer shows that the request is in the "processing" state.
This can happen on ANY page
I can't reproduce it easily, it's completely random.
To give an idea of "a very long time": this morning I had a page take more than 5 minutes to execute, when it normally should be returned to the client in less than 100 ms.
The application handles rather large uploads and downloads of files (up to 2 GB in size). This is also handled with a classic ASP script, using SoftArtisan FileUp. I don't think it can cause the problem though; we've had these uploads for quite a while now.
I've had the problem on two separate servers (in two separate locations, with different sets of data). One is running the application with good ol' SQL Server 2000 and the other runs SQL Server 2005. The web server is IIS 6 in both cases.
Any idea what the problem is, or how to solve this kind of problem?
Thanks.
Sebastien
Edit :
The problem came from memory fragmentation. Some ASP pages were used to download files from the server, with file sizes ranging from a few KB to more than 2 GB. These variations in size induced memory fragmentation. The ASP pages could also take quite some time to execute (the time for the user to download the file, minus whatever is cached at IIS's level), which is not really standard for server pages that should execute quickly.
This is what I did to improve things :
Put all the download logic in a single ASP page with sessions turned off
That allowed me to put that ASP page in a dedicated application pool that could be recycled every so often (downloads no longer disturb the rest of the application)
Turn on the LFH (Low Fragmentation Heap), which is not enabled by default on Windows 2003, in order to reduce memory fragmentation (see the sketch after the references below)
References for LFH :
http://msdn.microsoft.com/en-us/library/aa366750(v=vs.85).aspx
Link (there is a dll there that you can use to turn on LFH, but the article is in French. You'll have to learn our beautiful language now!)
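The documented way to turn on the LFH is to call HeapSetInformation on the heap in question. As a rough illustration of what such a helper does, here is a minimal P/Invoke sketch in C#; it only affects the calling process's default heap, and the LFH cannot be enabled while a debugger is attached:

```csharp
using System;
using System.Runtime.InteropServices;

class LfhEnabler
{
    // HEAP_INFORMATION_CLASS: HeapCompatibilityInformation = 0
    const int HeapCompatibilityInformation = 0;

    [DllImport("kernel32.dll", SetLastError = true)]
    static extern IntPtr GetProcessHeap();

    [DllImport("kernel32.dll", SetLastError = true)]
    static extern bool HeapSetInformation(
        IntPtr heapHandle,
        int heapInformationClass,
        ref uint heapInformation,
        UIntPtr heapInformationLength);

    static void Main()
    {
        // 2 = enable the Low Fragmentation Heap on the default process heap.
        uint lfh = 2;
        bool ok = HeapSetInformation(
            GetProcessHeap(),
            HeapCompatibilityInformation,
            ref lfh,
            (UIntPtr)sizeof(uint));

        Console.WriteLine(ok
            ? "LFH enabled on the process heap."
            : "HeapSetInformation failed, error " + Marshal.GetLastWin32Error());
    }
}
```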
I noticed the same thing on a classic ASP + Ajax application that I worked on. Using Timer, I timed the page load at 153 milliseconds, but the Firebug waterfall chart randomly says 3.5 seconds. The Timer output is in the response, and the waterfall chart claims it is Firefox waiting for a response from the server. Because the waterfall chart also shows the response, I can compare it to the Timer output, and there is a huge discrepancy "every so often".
Can you establish whether this is a problem for all pages or a common subset of pages?
If it's a subset, examine what these pages have in common; for example, they all use a specific COM DLL that other pages don't.
Does this problem affect multiple clients or just a few?
In other words, is there an issue with a specific browser or OS version?
Is this public or intranet?
Can you reproduce the problem from a client you own?
Is there any chance there are full-text search queries running on SQL Server?
Because if so, and if SQL Server has no access to the internet, it may cause a 45-second delay every few hours or so when it tries to check the certificates (though this does not apply to SQL Server 2000).
For a detailed explanation of what I'm referring to, read this.
Are any other apps running on your web server? If so, is your problematic application in the same app pool as any of them? If so, try creating a dedicated app pool for it. Maybe one of the other apps is having a problem and is adversely affecting yours.
One thing to watch out for: if you have server-side debugging turned on in IIS, the web server will run in single-threaded mode.
So if you try to load a page and someone else has hit that URL at the same time, you will be queued up behind them. It will seem like pages take a long time to load, but it's simply because the server is doling out page requests in a single-file line, and sometimes you aren't at the front of the line.
You may have turned this on for debugging and forgot to turn it off for production.