I'm new to Gatling, and I created a POST login request which returns the following response headers:
HTTP/1.1 302
Set-Cookie: JSESSIONID=ECA5F6FEA172B13BF5D445399C9C0962; Path=/; HttpOnly
Location: http://localhost:20001/index;jsessionid=ECA5F6FEA172B13BF5D445399C9C0962
Content-Language: en-US
Content-Length: 0
Date: Thu, 06 May 2021 16:01:20 GMT
I need to extract the JSESSIONID value and use it in other requests.
I tried:
.check(regex("JSESSIONID=(.*?);").find.saveAs("token"))
However, I got this error:
> regex(JSESSIONID=(.*?);).findAll.exists, found nothing 1 (100.0%)
Any help would be greatly appreciated!
You need to use headerRegex:
.check(headerRegex("Set-Cookie", """JSESSIONID=(.*?);""").saveAs("token"))
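For completeness, here is a minimal sketch of how the saved value can be reused in a later request. The endpoint paths, form parameters, and simulation name are illustrative assumptions, not taken from your application:

import io.gatling.core.Predef._
import io.gatling.http.Predef._

class LoginSimulation extends Simulation {

  // Disable automatic redirect following so the check runs against
  // the 302 response that actually carries the Set-Cookie header.
  val httpProtocol = http
    .baseUrl("http://localhost:20001")
    .disableFollowRedirect

  val scn = scenario("Login and reuse JSESSIONID")
    .exec(
      http("login")
        .post("/login")                  // hypothetical login endpoint
        .formParam("username", "user")   // hypothetical credentials
        .formParam("password", "pass")
        // Capture the session id from the Set-Cookie response header.
        .check(headerRegex("Set-Cookie", """JSESSIONID=(.*?);""").saveAs("token"))
    )
    .exec(
      http("index")
        .get("/index")
        // Reuse the saved attribute through Gatling's expression language.
        .header("Cookie", "JSESSIONID=${token}")
    )

  setUp(scn.inject(atOnceUsers(1))).protocols(httpProtocol)
}

Note that Gatling keeps a cookie jar per virtual user, so the JSESSIONID cookie is normally replayed automatically; extracting it by hand is mainly useful when the value has to be injected somewhere else, such as a URL or a custom header.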
Related
I am trying to create a program that logs into a website for me. The problem is that when I follow the redirection, the website supplies a unique cookie that I can't figure out how to add to the POST request. I have been going through each of the libcurl options on the man page, but I can't find anything that will do this. So far, this is the post request function that I have.
#include <stdio.h>
#include <string.h>
#include <curl/curl.h>

void webpost(char *url, char *postdata) {
    CURL *handler = curl_easy_init();
    CURLcode err;

    if (handler) {
        curl_easy_setopt(handler, CURLOPT_URL, url);
        /* sizeof(postdata) would only measure the pointer;
           use the actual string length for the body size */
        curl_easy_setopt(handler, CURLOPT_POSTFIELDSIZE, (long)strlen(postdata));
        curl_easy_setopt(handler, CURLOPT_POSTFIELDS, postdata);
        curl_easy_setopt(handler, CURLOPT_FOLLOWLOCATION, 1L);
        curl_easy_setopt(handler, CURLOPT_VERBOSE, 1L);
        err = curl_easy_perform(handler);
        if (err != CURLE_OK) {
            printf("ERROR POST: %s returned (%s)\n", url, curl_easy_strerror(err));
        }
        curl_easy_cleanup(handler);
    }
}
When this function runs, I get the following result.
* Trying 10.10.10.10...
* TCP_NODELAY set
* Connected to website.com (10.10.10.10) port 2048 (#0)
> POST /login HTTP/1.1
Host: website.com
Accept: */*
Content-Length: 50
Content-Type: application/x-www-form-urlencoded
* upload completely sent off: 50 out of 50 bytes
* Mark bundle as not supporting multiuse
< HTTP/1.1 302 Moved temporarily
< Date: Wed, 26 Aug 2020 05:59:50 GMT
< Server: EZproxy
< Expires: Mon, 02 Aug 1999 00:00:00 GMT
< Last-Modified: Wed, 26 Aug 2020 05:59:50 GMT
< Cache-Control: no-store, no-cache, must-revalidate
< Cache-Control: post-check=0, pre-check=0
< Pragma: no-cache
< Set-Cookie: ezproxy=uRZWAo3IsKyR9O0; Path=/; Domain=.website.com
< Location: http://website.com/connect?session=suRZWAo3IsKyR9O0&url=menu
< Connection: close
<
* Closing connection 0
* Issue another request to this URL: 'http://website.com/connect?session=suRZWAo3IsKyR9O0&url=menu'
* Switch from POST to GET
* Hostname website.com was found in DNS cache
* Trying 10.10.10.10...
* TCP_NODELAY set
* Connected to website.com (10.10.10.10) port 2048 (#1)
> GET /connect?session=suRZWAo3IsKyR9O0&url=menu HTTP/1.1
Host: website.com
Accept: */*
* Mark bundle as not supporting multiuse
< HTTP/1.1 302 Moved temporarily
< Date: Wed, 26 Aug 2020 05:59:50 GMT
< Server: EZproxy
< Expires: Mon, 02 Aug 1999 00:00:00 GMT
< Last-Modified: Wed, 26 Aug 2020 05:59:50 GMT
< Cache-Control: no-store, no-cache, must-revalidate
< Cache-Control: post-check=0, pre-check=0
< Pragma: no-cache
< Set-Cookie: ezproxy=uRZWAo3IsKyR9O0; Path=/; Domain=.website.com
< Location: http://website.com/connect?session=ruRZWAo3IsKyR9O0&url=menu
< Connection: close
<
* Closing connection 1
* Issue another request to this URL: 'website.com/connect?session=ruRZWAo3IsKyR9O0&url=menu'
* Hostname website.com was found in DNS cache
* Trying 10.10.10.10...
* TCP_NODELAY set
* Connected to website.com (10.10.10.10) port 2048 (#2)
> GET /connect?session=ruRZWAo3IsKyR9O0&url=menu HTTP/1.1
Host: website.com
Accept: */*
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< Date: Wed, 26 Aug 2020 05:59:50 GMT
< Server: EZproxy
< Content-Type: text/html
< Connection: close
<
<html>
<head>
<title>Cookie Required</title>
</head>
<body>
<p>
Licensing agreements for these databases require that access be extended
only to authorized users. Once you have been validated by this system,
a "cookie" is sent to your browser as an ongoing indication of your authorization to
access these databases. This cookie only needs to be set once during login.
</p>
<p>
If you are using a firewall or network privacy program, you may need
reconfigure it to allow cookies to be set from this server.
</p>
<p>
As you access databases, they may also use cookies. Your ability to use those databases
may depend on whether or not you allow those cookies to be set.
</p>
<p>
To login again, click here.
</p>
</body>
</html>
* Closing connection 2
Simply add a Cookie: header in the next request to a site matching the domain and path parameters of the Set-Cookie: header:
Cookie: ezproxy=uRZWAo3IsKyR9O0
That will be enough for the server to recognize the session you come from, so it can locate the data belonging to your session.
You can read RFC 6265 (HTTP State Management Mechanism) for a description of the Cookie and Set-Cookie headers.
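In libcurl terms that can be done with the CURLOPT_COOKIE option on the handle used for the next request (the cookie value is the one from the capture above):

/* Attach the session cookie to the next request by hand. */
curl_easy_setopt(handler, CURLOPT_COOKIE, "ezproxy=uRZWAo3IsKyR9O0");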
Thank you, I was able to get it working by saving the cookie to a file and loading it in the same request.
curl_easy_setopt(handler, CURLOPT_COOKIEJAR, "cookies.txt");
curl_easy_setopt(handler, CURLOPT_COOKIEFILE, "cookies.txt");
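For what it's worth, CURLOPT_COOKIEFILE is the option that switches libcurl's cookie engine on (the named file does not even have to exist), which is why cookies set on the first hop now get replayed on the redirected requests; CURLOPT_COOKIEJAR additionally writes them out at cleanup. If you have no need for the file, an empty string works too:

/* Enable the in-memory cookie engine without reading any file. */
curl_easy_setopt(handler, CURLOPT_COOKIEFILE, "");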
I am downloading a Helm chart from https://kubernetes-charts.storage.googleapis.com/redis-0.5.1.tgz. (The fact that it is Redis or related to Helm or anything in particular is irrelevant to this question, which is just about things like Content-Encoding and so on.)
When I check its headers like this:
$ curl -H "Accept-Encoding: gzip" -I https://kubernetes-charts.storage.googleapis.com/redis-0.5.1.tgz
…I do not see a Content-Encoding header in the output, and the Content-Type is listed as being application/x-tar:
HTTP/1.1 200 OK
X-GUploader-UploadID: AEnB2UqBzSXfTToMAdMARXSjJeN0on3jaNY3u74eXcWfvqsOwRpi38Xc6T0XrrmY4otPeySaYRwXyHccHYtChoPAgFQwYZhQMhcpZRWtZURRANGdfRJoupI
Expires: Tue, 27 Jun 2017 00:21:59 GMT
Date: Mon, 26 Jun 2017 23:21:59 GMT
Cache-Control: public, max-age=3600
Last-Modified: Fri, 05 May 2017 03:03:41 GMT
ETag: "e4184c81a58fb731283847222a1f4005"
x-goog-generation: 1493953421241613
x-goog-metageneration: 1
x-goog-stored-content-encoding: identity
x-goog-stored-content-length: 3550
x-goog-meta-goog-reserved-file-mtime: 1493953414
Content-Type: application/x-tar
x-goog-hash: crc32c=bQHveg==
x-goog-hash: md5=5BhMgaWPtzEoOEciKh9ABQ==
x-goog-storage-class: STANDARD
Accept-Ranges: bytes
Content-Length: 3550
Server: UploadServer
Alt-Svc: quic=":443"; ma=2592000; v="39,38,37,36,35"
The resulting file, when downloaded, is a gzipped tar archive.
What is the proper way of programmatically detecting that the payload is in fact gzipped? Or is this a problem with the web server in question?
I think the server is misconfigured. Since .tgz is just an abbreviation for .tar.gz, it should get the content type application/gzip.
Content-Type: application/x-tar
This header tells you the type, but I'm not sure that's gzip; see:
https://superuser.com/questions/901962/what-is-the-correct-mime-type-for-a-tar-gz-file
For a way to identify it programmatically, see the accepted answer at How to check if a file is gzip compressed?
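For reference, the check described there comes down to reading the first two bytes: every gzip stream starts with the magic bytes 0x1f 0x8b (RFC 1952). A minimal sketch in C (the file name is just an example):

#include <stdio.h>

/* Returns 1 if the file begins with the gzip magic bytes 0x1f 0x8b. */
int is_gzip(const char *path) {
    unsigned char magic[2];
    FILE *f = fopen(path, "rb");
    if (f == NULL)
        return 0;
    size_t n = fread(magic, 1, 2, f);
    fclose(f);
    return n == 2 && magic[0] == 0x1f && magic[1] == 0x8b;
}

int main(void) {
    printf("%s\n", is_gzip("redis-0.5.1.tgz") ? "gzip" : "not gzip");
    return 0;
}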
My object GETs a file over HTTP.
It does so using the If-Modified-Since header. If the file has not been modified since the time in the header, a Not Modified response will be returned and the file should not be fetched and written. Like so:
class YouTube
  # ...
  def parse
    uri = URI("http://i.ytimg.com/vi/#{@id}/0.jpg")
    req = Net::HTTP::Get.new(uri.request_uri)

    if File.exists? thumbname
      stat = File.stat thumbname
      req['If-Modified-Since'] = stat.mtime.rfc2822
    end

    res = Net::HTTP.start(uri.hostname, uri.port) { |http|
      http.request(req)
    }

    File.open(thumbname, 'wb') do |f|
      f.write res.body
    end if res.is_a?(Net::HTTPSuccess)
  end
  # ...
end
I want to test both cases (in both cases, a file on disk exists). To do so, I'd need to stub either something in Net::HTTP, or File.stat, so that it returns an mtime for which I am sure the online resource will return fresh content or Not Modified.
Should I stub (or even mock) Net::HTTP? And if so, what?
Or should I stub mtime to return a date far in the past or far in the future, to force or suppress the Not-Modified response?
Edit: Diving deeper into the matter, I learned that the i.ytimg.com domain does not support these headers, so I'll need to solve this by inspecting JSON from the YouTube API. However, the question of what and how to mock when testing If-Modified-Since headers still stands.
Here is how I conclude the domain does not support this:
$curl -I --header 'If-Modified-Since: Sun, 24 Mar 2013 17:33:29 +0100' -L http://i.ytimg.com/vi/D80QdsFWdcQ/0.jpg
HTTP/1.1 200 OK
Content-Type: image/jpeg
Date: Sun, 24 Mar 2013 16:31:10 GMT
Expires: Sun, 24 Mar 2013 22:31:10 GMT
X-Content-Type-Options: nosniff
Server: sffe
Content-Length: 13343
X-XSS-Protection: 1; mode=block
Cache-Control: public, max-age=21600
Age: 822
There is no "Last-Modified" there. Illustrated with another call, to example.com, which does support the If-Modified-Since headers.
$curl -I --header 'If-Modified-Since: Sun, 24 Mar 2013 17:33:29 +0100' -L example.com
HTTP/1.0 302 Found
Location: http://www.iana.org/domains/example/
Server: BigIP
Connection: Keep-Alive
Content-Length: 0
HTTP/1.1 302 FOUND
Date: Sun, 24 Mar 2013 16:47:47 GMT
Server: Apache/2.2.3 (CentOS)
Location: http://www.iana.org/domains/example
Connection: close
Content-Type: text/html; charset=utf-8
HTTP/1.1 304 NOT MODIFIED
Date: Sun, 24 Mar 2013 16:47:47 GMT
Server: Apache/2.2.3 (CentOS)
Connection: close
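Coming back to what to stub: one option is to stub at the HTTP boundary rather than inside Net::HTTP, for example with the webmock gem. A sketch, assuming the YouTube class and thumbname behave as shown above:

require 'webmock/rspec'

describe YouTube do
  it 'writes the thumbnail when the resource has changed' do
    # Any thumbnail request gets a 200 with a fake body.
    stub_request(:get, %r{i\.ytimg\.com/vi/.+/0\.jpg})
      .to_return(status: 200, body: 'jpegdata')
    # ... invoke parse and assert the file on disk was (re)written
  end

  it 'skips writing when the resource is unmodified' do
    # A 304 is not a Net::HTTPSuccess, so parse should not write.
    stub_request(:get, %r{i\.ytimg\.com/vi/.+/0\.jpg})
      .to_return(status: 304, body: '')
    # ... invoke parse and assert the file was left untouched
  end
end

This keeps File.stat and the mtime real while you control exactly which response the code under test sees, which is the branch you actually care about.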
I have an AngularJS app that parses HTML which largely comes from emails. In some cases data-bind-html will throw a Parse Error, but not in all cases; I've been unable to determine why.
Does anyone know some types of tokens or syntax that can cause the error?
Here's a sample of a file which trips it up:
,
I received the following error message...:
------------------------------------------------------------------------ The server encountered an unexpected condition that prevented it from
fulfilling the request.
HTTP_Status = 500 (Internal Server Error)
URL =
----------------------------------------- Request Headers
----------------------------------------- POST /ss/servlet/FooServlet/ HTTP/1.1 Accept: Accept: / Host: mydomain.org Content-Length: 141
User-Agent: FooBar/2.1.94 Pragma: no-cache Cache-Control: no-cache
Content-Type: application/x-www-form-urlencoded; charset="utf-8"
Connection: Keep-Alive Cookie:
BIGipServerpool_cookie_apps_ss_8188=rd860o00000000000000000000ffff0a0ad0aco8188;
JSESSIONID=5215F941A173B6127E9A95B3E99E3A74
----------------------------------------- Response Headers
----------------------------------------- HTTP/1.1 500 Internal Server Error Server: Apache-Coyote/1.1 Set-Cookie:
JSESSIONID=A9B7C98E5359D961DC8958F87CCCF49E; Path=/ss
Content-Disposition: attachment; filename="spreadsheet.csv"
Content-Description: spreadsheet.csv Content-Transfer-Encoding: binary
Content-Type: application/csv;charset=ISO-8859-1 Transfer-Encoding:
chunked Date: Wed, 06 Mar 2013 18:46:19 GMT Connection: close
-------------...
Emails can contain a lot of arbitrary encoding and invalid HTML, such as <email#domain.com>. To eliminate the Parse Errors, I've implemented my own filter, which runs before the content goes through ngSanitize/ng-bind-html.
ng-bind-html="obj.emailContent | sanitizeEmail"
myModule.filter('sanitizeEmail', function() {
  return function(input) {
    return input.replace(/<[\w-]*\.[\w-]*>/g, '').replace(/<[\w\.\$-]+[\:#].*>/g, '');
  };
});
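One small hardening worth considering (my addition, not part of the original filter): guard against a missing body, since calling .replace on undefined throws a TypeError before ngSanitize even runs.

myModule.filter('sanitizeEmail', function() {
  return function(input) {
    // An empty string is a safe value for ng-bind-html.
    if (!input) { return ''; }
    return input.replace(/<[\w-]*\.[\w-]*>/g, '')
                .replace(/<[\w\.\$-]+[\:#].*>/g, '');
  };
});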
Quite simply, I'm trying to generate and download a CSV file from a CakePHP controller. There's no problem generating the CSV, and everything works until the response is >= 4096 bytes.
The following controller illustrates the problem:
class TestTypeController extends Controller {

    public function csv($size = 100) {
        # set the content type
        Configure::write('debug', 0);
        $this->autoRender = false;
        $this->response->type('csv');

        # send the response
        for ($i = 0; $i < $size; $i++)
            echo 'x';
    }
}
When I call http://baseurl/test_type/csv/4095, I'm prompted to save the file, and the Content-Type header is text/csv. The response headers are:
HTTP/1.1 200 OK
Date: Tue, 05 Jun 2012 14:28:56 GMT
Server: Apache/2.2.22 (Ubuntu)
X-Powered-By: PHP/5.3.10-1ubuntu3.1
Content-Length: 4095
Keep-Alive: timeout=5, max=98
Connection: Keep-Alive
Content-Type: text/csv; charset=UTF-8
When I call http://baseurl/test_type/csv/4096, the file is printed to the screen, and the response headers are:
HTTP/1.1 200 OK
Date: Tue, 05 Jun 2012 14:28:53 GMT
Server: Apache/2.2.22 (Ubuntu)
X-Powered-By: PHP/5.3.10-1ubuntu3.1
Vary: Accept-Encoding
Content-Encoding: gzip
Content-Length: 38
Keep-Alive: timeout=5, max=100
Connection: Keep-Alive
Content-Type: text/html
Obviously, 4 kB is the threshold at which the server starts gzipping the response. I'm not familiar with how the Content-Type is meant to react, but I'd obviously prefer it to remain text/csv.
The same problem occurs using the RequestHandlerComponent to manage the type of the response.
I'm using CakePHP 2.2.0-RC1, but I've verified the problem also exists with stable 2.1.3. Any ideas? Pointers in the right direction?
The answer was pretty simple: the controller should return the CSV data instead of echoing it.
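In other words, something along these lines (a sketch based on the controller above; in CakePHP 2.x the dispatcher uses an action's return value as the response body when autoRender is off):

class TestTypeController extends Controller {

    public function csv($size = 100) {
        Configure::write('debug', 0);
        $this->autoRender = false;
        $this->response->type('csv');

        # return the payload instead of echoing it, so CakePHP sends
        # the response (and its Content-Type header) itself
        return str_repeat('x', $size);
    }
}

Presumably the echoed bytes were going straight to the output buffer, so the Content-Type set on the response object never made it out once mod_deflate's buffering kicked in; returning the body keeps the whole response under CakePHP's control.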