How to emulate "Server closed connection without any data" with Fiddler? - request

How do I set up Fiddler to reproduce the following steps?
Client sends an HTTP request (the contents don't matter) to the server.
Server waits a short time to emulate latency, for example 50-100 ms as in a real request.
Server closes the connection without sending any data.
I tried playing with the AutoResponder but didn't find anything suitable there :(
The closest result I found: create an Empty.dat file with no data inside and serve it as the response in the AutoResponder. But in this case Fiddler still sends some data - auto-generated headers:
HTTP/1.1 200 OK with automatic headers
Date: Wed, 09 Oct 2019 12:21:55 GMT
Content-Length: 0
Cache-Control: max-age=0, must-revalidate
Content-Type: application/octet-stream
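As a workaround outside Fiddler, the target behavior itself can be reproduced with a tiny throwaway server: accept the connection, read the request, wait 50-100 ms, then close the socket without writing a single byte. A minimal sketch in Python (the host, port, and delay values are arbitrary test settings):

```python
import socket
import threading
import time

def run_drop_server(host="127.0.0.1", port=0, delay=0.075):
    """Accept one HTTP connection, wait ~50-100 ms, then close
    the socket without sending any response data at all."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, port))          # port=0: let the OS pick a free port
    srv.listen(1)
    actual_port = srv.getsockname()[1]

    def serve():
        conn, _ = srv.accept()
        conn.recv(65536)            # read (and ignore) the client's request
        time.sleep(delay)           # emulate server-side latency
        conn.close()                # close the connection with no data sent
        srv.close()

    threading.Thread(target=serve, daemon=True).start()
    return actual_port
```

A client pointed at this server sees exactly the scenario in question: the request goes out, nothing comes back, and the connection is closed after the delay.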

Related

How does FDM (Free Download Manager) determine a file's date and time?

I assume many of you use download managers. I use the Free Download Manager, and I like it a lot. But I am confused about how this valuable tool determines the date and time of the file it downloads.
For example, I downloaded the PDF file from
https://www.cusd80.com/cms/lib/AZ01001175/Centricity/Domain/10180/The%20Crucible%20Anchor%20Text.pdf
And it sets the file's date-time as follows:
| Created: Friday, September 20, 2019, 6:12:36 PM
When I checked HTTP headers, I found this information
| last-modified: Wed, 12 Sep 2018 15:22:28 GMT
When I checked the creation date-time recorded in the PDF file, I found
| Created: Wed, 12 Sep 2018 8:21:12 GMT
Does anyone have any idea where the 2019 date listed above comes from?
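What download managers commonly do (an assumption about FDM's internals, not something confirmed by its documentation) is copy the Last-Modified header onto the downloaded file's modified timestamp, while the Windows "Created" timestamp is set by the filesystem when the file is first written to disk - which would make a 2019 creation date simply the download date. A sketch of the header-to-timestamp step (the path argument of apply_last_modified is hypothetical):

```python
import os
from email.utils import parsedate_to_datetime

def apply_last_modified(path, last_modified_header):
    """Set a downloaded file's modification time from the HTTP
    Last-Modified header, as many download managers appear to do."""
    dt = parsedate_to_datetime(last_modified_header)
    ts = dt.timestamp()
    os.utime(path, (ts, ts))  # (atime, mtime)

# The header from the question parses to a timezone-aware datetime:
hdr = "Wed, 12 Sep 2018 15:22:28 GMT"
print(parsedate_to_datetime(hdr).isoformat())  # 2018-09-12T15:22:28+00:00
```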

Gmail API - detecting if an email was sent by a 3rd party tool

I am retrieving the header information of emails through the Gmail API and would like to know if an email was sent manually or through a 3rd party tool like e.g. an email campaigning tool or some other tool that sends emails for the user.
I suspect that I can do this with the Received field, which seems to give some information on how the email was sent. However, it is not clear how I should interpret this information or what structure Google uses for this field. Does anyone have more information on this?
Some examples of values that this field returns (IP addresses and IDs changed):
"from 13933457865 named unknown by gmailapi.google.com with HTTPREST; Tue, 13 Sep 2016 07:49:15 -0700"
"from [127.0.0.1] (ec5-42-77-39-231.us-west-2.compute.amazonaws.com. [10.00.00.221]) by smtp.gmail.com with ESMTPSA id ak3sm22356856pad.19.2016.10.06.04.16.13 (version=TLS1_2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128); Thu, 06 Oct 2016 04:16:13 -0700 (PDT)"
"by 10.00.00.200 with HTTP; Tue, 20 Sep 2016 18:42:24 -0700 (PDT)"

Apache Proxy Plugin handling of JVM ID in JSESSION Cookie

I am trying to understand the mapping between the JVMID present in the JSESSIONID cookie and the ipaddr:port of the managed server. A few questions below -
Who generates the JVMID, and how does the Apache plugin know the JVMID of a given node? Does it get it back in the response from the server (maybe as part of the Dynamic Server List)?
If we send a request with a JSESSIONID cookie containing a JVMID to an Apache instance that hasn't handled any requests yet, what would be the behavior?
Assuming that Apache maintains a local mapping between JVMIDs and node addresses, how does this mapping get updated? (especially in the case of an Apache restart or a managed-server restart)
See more at: http://middlewaremagic.com/weblogic/?p=654#comment-9054
1) The JVMID is generated by each WebLogic server and appended to the JSESSIONID.
Apache logs the individual server hash and maps it to the respective managed server, which lets it send subsequent requests to the same WebLogic managed server as the previous request.
Here is an Example log from http://www.bea-weblogic.com/weblogic-server-support-pattern-common-diagnostic-process-for-proxy-plug-in-problems.html
Mon May 10 13:14:40 2004 getpreferredServersFromCookie: -2032354160!-457294087
Mon May 10 13:14:40 2004 GET Primary JVMID1: -2032354160
Mon May 10 13:14:40 2004 GET Secondary JVMID2: -457294087
Mon May 10 13:14:40 2004 [Found Primary]: 172.18.137.50:38625:65535
Mon May 10 13:14:40 2004 list[0].jvmid: -2032354160
Mon May 10 13:14:40 2004 secondary str: -457294087
Mon May 10 13:14:40 2004 list[1].jvmid: -457294087
Mon May 10 13:14:40 2004 secondary str: -457294087
Mon May 10 13:14:40 2004 [Found Secondary]: 172.18.137.54:38625:65535
Mon May 10 13:14:40 2004 Found 2 servers
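The JVMIDs the plugin logs above come straight from the cookie value: WebLogic appends them to the session ID, separated by "!". A small parsing sketch (the "AbCdEf" session prefix is made up; the JVMIDs match the log):

```python
def parse_jvmids(jsessionid):
    """Split a WebLogic JSESSIONID of the form
    <session>!<primaryJVMID>[!<secondaryJVMID>] into its JVMIDs.
    The secondary JVMID is optional."""
    parts = jsessionid.split("!")
    primary = parts[1] if len(parts) > 1 else None
    secondary = parts[2] if len(parts) > 2 else None
    return primary, secondary

# Yields the same primary/secondary pair seen in the plugin log:
print(parse_jvmids("AbCdEf!-2032354160!-457294087"))
```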
2) If the plugin is installed on the new Apache as well, then the moment Apache starts up it pings all available WebLogic servers to report them as live or dead (my terms, not official). While doing that health check it gets the JVMID of each available WebLogic server. After that, when it receives the first request carrying a pre-existing JVMID, it can direct it correctly.
3) There are some params like DynamicServerList ON - if it's ON, the plugin keeps polling for healthy WebLogic servers; if OFF, it sends requests to a hardcoded list only. So with ON it's pretty dynamic.

Upload many ftp files at once?

I have about 2,000 small files. I want to upload them to a server using FileZilla and it keeps kicking me out and telling me this:
Status: Delaying connection for 1 second due to previously failed connection attempt...
Status: Resolving address of ....www.blabla.com
Status: Connecting to 10.10.10
Status: Connection established, waiting for welcome message...
Response: 220---------- Welcome to Pure-FTPd [privsep] ----------
Response: 220-You are user number 12 of 500 allowed.
Response: 220-Local time is now 14:46. Server port: 21.
Response: 220-This is a private system - No anonymous login
Response: 220 You will be disconnected after 3 minutes of inactivity.
Command: USER user2
Response: 331 User user2 OK. Password required
Command: PASS ********
Response: 421 I can't accept more than 5 connections as the same user
Is there a better way to upload many files?
Option 1 > Go to Site Manager > Transfer Settings > check "Limit number of simultaneous connections" and set it to 1 connection > reconnect
Option 2 > Change the FTP password to break the current connections > and then reconnect
Neither option will work if your hosting server enforces a per-user limit on FTP connections; in that case you must contact your hosting provider or upgrade your account.
The log just says that multiple connections are not allowed for your FTP user. Make sure that no more than five sessions are logged in as the same user at the same time.
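If the server caps connections per user, the simplest fix is to reuse one connection for the whole batch. A sketch with Python's ftplib (host, credentials, and directories are placeholders):

```python
from ftplib import FTP
from pathlib import Path

def list_upload_files(local_dir):
    """Collect the regular files to upload, in a stable order."""
    return sorted(p for p in Path(local_dir).iterdir() if p.is_file())

def upload_directory(host, user, password, local_dir, remote_dir="."):
    """Upload every file in local_dir over a single FTP connection,
    staying well under any per-user connection limit."""
    files = list_upload_files(local_dir)
    with FTP(host) as ftp:              # one control connection for everything
        ftp.login(user, password)
        ftp.cwd(remote_dir)
        for path in files:
            with path.open("rb") as fh:
                ftp.storbinary(f"STOR {path.name}", fh)
    return len(files)
```

In FileZilla the equivalent is the "Limit number of simultaneous connections" setting from Option 1; the script above just makes the single-connection behavior explicit.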

dotnetnuke 7 redirect to gettingstarted.aspx

I've followed the DNN 7 installation videos to do a single-server install. The only slight deviation I made was to create a database up front, as demonstrated in the video. I'm using SQL Server Express rather than the full-fat SQL Server, but I followed the install steps for full-fat SQL Server. I did this because I don't understand the conventional way of using SQL Server Express: when I try to attach to the MDF file from SSMS, it tells me that the file is locked - even after stopping IIS!
Anyhow, for my first installation using the dnndev.me domain everything worked great.
I then tried to repeat the installation but this time using a real domain - you can see it at www.rotherweb.co.uk. The problem is that the site always gets redirected to the page gettingstarted.aspx.
I have pasted output from Fiddler below:
Result Protocol Host URL Body Caching Content-Type Process Comments Custom
1 200 HTTP www.rotherweb.co.uk /Home.aspx 6,839 private text/html; charset=utf-8 chrome:4200
2 200 HTTP www.rotherweb.co.uk /Resources/Shared/scripts/DotNetNukeAjaxShared.js?=1357564157543 3,393 application/x-javascript chrome:4200
3 200 HTTP www.rotherweb.co.uk /GettingStarted.aspx 7,793 private text/html; charset=utf-8 chrome:4200
4 200 HTTP www.rotherweb.co.uk /Resources/Shared/scripts/widgets.js?=1357564157763 3,732 application/x-javascript chrome:4200
I have noticed that the working dnndev.me website has subfolders \portals\0\images, \cache, \Templates, and \Users. However, the failed website contains only \portals\0\cache. This suggests something failed during the installation, but when I click to see the installation logs I get "No Installation Log".
Could anyone please help?
Thanks in advance
Rob.
I managed to work around this by creating the database in the default folder, C:\Program Files (x86)\Microsoft SQL Server etc...
I'm not sure why having the db in a different folder caused a problem, because I had granted file permissions on the folder to the account of the host app pool. If I get a chance I will go back and dig a little more.
