Receiving "message":"CB-ACCESS-KEY header is required" when attempting to connect to coinbase pro api - coinbase-api

I have been trying to mock up a CLI script to connect to the Coinbase Pro API.
I have been able to get it to work successfully using Python, but I am looking for a shell-based solution.
#!/bin/bash
path="/accounts/"
body=""
function make_request(){
    method="$1"
    requestpath="$2"
    body="$3"
    timestamp=$(date +%s)
    api_key=$(head -n 1 ./api_key)
    sec_key=$(head -n 1 ./sec_key)
    passphr=$(head -n 1 ./passphrase)
    hmac_key=$(echo -n $sec_key | base64 -d)
    message="$timestamp$method$requestpath$body"
    # create a sha256 hmac with the secret
    signature_b64=$(echo -n $message | openssl dgst -sha256 -hmac $hmac_key -binary | base64)
    user_agent="Mozilla/5.0 (X11; Linux i686; rv:59.0) Gecko/20100101 Firefox/59.0"
    url="https://api.pro.coinbase.com"
    ## Build Header info ##
    cont_type="'Content-Type': 'Application/JSON'"
    accpt_enc="'Accept-Encoding': 'gzip, deflate'"
    head_sign="'CB-ACCESS-SIGN': '"$signature_b64"'"
    head_time="'CB-ACCESS-TIMESTAMP': '"$timestamp"'"
    head_key="'CB-ACCESS-KEY': '"$api_key"'"
    head_pass="'CB-ACCESS-PASSPHRASE': '"$passphr"'"
    # data=$( curl -v \
    #     -A "$user_agent" \
    #     -H "$head_key" \
    #     -H "$cont_type" \
    #     -H "$head_sign" \
    #     -H "$head_time" \
    #     -H "$head_pass" \
    #     "$url$path" )
    data=$( wget --verbose \
        --debug \
        --user-agent="$user_agent" \
        --header="$head_key" \
        --header="$cont_type" \
        --header="$head_sign" \
        --header="$head_time" \
        --header="$head_pass" \
        "$url$path" )
    echo $data
}
make_request GET "$path" "$body"
I get the same error whether I use curl or wget, and I am not sure what is going on.
I have compared the headers from the debug and verbose output, and they appear similar to my Python headers.
Example python headers:
/accounts/
{'User-Agent': 'python-requests/2.20.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive', 'Cookie': '__cfduid=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx', 'Content-Type': 'Application/JSON', 'CB-ACCESS-SIGN': 'Gk+xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx=', 'CB-ACCESS-TIMESTAMP': '1592491810.1017096', 'CB-ACCESS-KEY': 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx', 'CB-ACCESS-PASSPHRASE': 'xxxxxxxxxxx'}
wget headers:
---request begin---
GET /accounts/ HTTP/1.1
User-Agent: Mozilla/5.0 (X11; Linux i686; rv:59.0) Gecko/20100101 Firefox/59.0
Accept: */*
Accept-Encoding: identity
Host: api.pro.coinbase.com
Connection: Keep-Alive
'CB-ACCESS-KEY': 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
'Content-Type': 'Application/JSON'
'CB-ACCESS-SIGN': 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxQ='
'CB-ACCESS-TIMESTAMP': '1592492837'
'CB-ACCESS-PASSPHRASE': 'xxxxxxxxxxx'
---request end---
curl headers:
> GET /accounts/ HTTP/2
> Host: api.pro.coinbase.com
> user-agent: Mozilla/5.0 (X11; Linux i686; rv:59.0) Gecko/20100101 Firefox/59.0
> accept: */*
> 'cb-access-key': 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
> 'content-type': 'Application/JSON'
> 'cb-access-sign': 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxc='
> 'cb-access-timestamp': '1592439326'
> 'cb-access-passphrase': 'xxxxxxxxxxx'
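Note that in both the wget and curl request dumps above, the single quotes written into the header variables are sent as literal characters, so the server sees header names like 'CB-ACCESS-KEY' (quotes included) rather than CB-ACCESS-KEY. A minimal sketch of building the headers as plain name: value strings instead, reusing the variables from the script above (untested against the live API):
head_key="CB-ACCESS-KEY: $api_key"
head_sign="CB-ACCESS-SIGN: $signature_b64"
head_time="CB-ACCESS-TIMESTAMP: $timestamp"
head_pass="CB-ACCESS-PASSPHRASE: $passphr"
cont_type="Content-Type: application/json"
data=$(curl -s \
    -A "$user_agent" \
    -H "$head_key" \
    -H "$cont_type" \
    -H "$head_sign" \
    -H "$head_time" \
    -H "$head_pass" \
    "$url$path")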

Related

Modifying JSON inside bash loop using jq - reading/writing from files or store all data at vars?

I have the following JSON template file - tmpl.json
{
"locations": [],
"name": "",
"script": {
"events": [
{
"description": "",
"type": "navigate",
"url": "",
"wait": {
"waitFor": "page_complete"
}
}
],
"version": "1.0"
},
"type": "BROWSER"
}
I need to use the above file as a template, dynamically add content to .locations[], .name, .script.events[].description and .script.events[].url inside a loop, and then, in the same loop, use the resulting JSON in a curl PUT call.
The content that needs to be added to locations[] is a static array in a separate loc.json file:
["LOCATION-577B","LOCATION-D7FF","LOCATION-8BE4","LOCATION-0CE9"]
The values for the other keys are calculated dynamically inside the loop.
Here is how I manipulate the data to create a temporary JSON file for each iteration of the loop. $1 is the parameter calculated in the loop and passed to the function, which creates the temporary JSON that is then used with curl.
jq --slurpfile loc "loc.json" \
--arg URL "$1" \
'.locations|=$loc[] |
.name=$URL |
.script.events[].url=$URL |
.script.events[].description="Loading of URL \"" +$URL + "\""' \
"tmpl.json" >"$1-temp.json"
While the above works, I don't consider it a very clean or efficient way to deal with the problem. I need to iterate the loop over 1000 times, which means creating 1000 temporary files locally and cleaning them up afterward.
What would be a better way to deal with the problem? Read both the static locations array and the template file into variables via heredocs and use them inside the loop?
Or assign the resulting JSON output to a variable and then use it in the curl PUT call?
However, in the latter case, whitespace and other special characters need careful handling... The template file I've shown is just a fragment of the whole JSON file, which contains many more keys/values, but I only need to modify the keys outlined above.
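For the variable-based approach, a minimal sketch (reusing the jq filter above; the helper name update_and_push and the $target_url placeholder for the PUT endpoint are made up) that keeps the document in memory and feeds it to curl on stdin, so no temp file is created:
update_and_push() {
    # build the updated JSON in memory instead of writing "$1-temp.json"
    local doc
    doc=$(jq --slurpfile loc "loc.json" --arg URL "$1" \
        '.locations |= $loc[] |
         .name = $URL |
         .script.events[].url = $URL |
         .script.events[].description = "Loading of URL \"" + $URL + "\""' \
        "tmpl.json")
    # quoting "$doc" preserves whitespace and special characters; curl reads it from stdin
    curl --location --request PUT "$target_url" \
        -H "Content-Type: application/json; charset=utf-8" \
        --data @- <<<"$doc" >"$1-updated.json"
}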
Update/clarification: the $1 parameter used in the jq call is a single URL without the http/https prefix. The list of URLs is calculated by another function/jq call and assigned to a bash var $URL_list. This var is then used in the for loop to call the function that creates the updated JSON for each URL.
The REST calls also use the curl_combined_parameter and curl_combined_update_put vars, which hold full requests with various parameters, but they have no relation to the problem I am trying to solve.
So a cut-down version of the whole script is the following:
#!/bin/bash
# Initiate the REST call which generates URL list
function get_url_list() {
# The function initiates a REST call via curl with $curl_combined_parameter,
# pipes the result to `jq`, and assigns the resulting list to the var URL_list.
services_json=$(curl -s \
--location \
--request GET \
"$curl_combined_parameter" \
--header "Authorization: Bearer ${token}")
# Now we filter the resulted `json` and get the list of sites
URL_list=$(echo "$services_json" |
jq -r ' map(.Body[].webServerName |
select( (. != null and endswith(":443") ) and ( test("commerce|backoffice") | not ) ) ) |
unique[] | .[0:-4] ')
}
function update_json() {
jq --slurpfile loc "loc.json" \
--arg URL "$1" \
'.locations|=$loc[] |
.name=$URL |
.script.events[].url=$URL |
.script.events[].description="Loading of URL \"" +$URL + "\""' \
"tmpl.json" >"$1-temp.json"
}
push_changes(){
# take the resulted `JSON` generated from `update_json` and push it via curl PUT call
curl --location \
--request PUT \
"$curl_combined_update_put" \
-H "accept: application/json; charset=utf-8" \
-H "Content-Type: application/json; charset=utf-8" \
-H "Authorization: Bearer ${token}" \
-d "#$1-temp.json" >>"$1-updated.json"
}
for i in $URL_list; do
update_json "$i"
push_changes "$i"
done
Suggestions are welcome. I just want to avoid creating unnecessary temp files and keep all the data inside the script.
Thanks.
First, we're going to want to make your URL_list be an array instead of a string:
readarray -t URL_list < <(jq -r 'map(.Body[].webServerName |
select( (. != null and endswith(":443") ) and
( test("commerce|backoffice") | not ) ) ) |
unique[] | .[0:-4]' <<<"$services_json")
Next, we're going to have a single copy of jq take all your URLs as line-oriented input and emit one JSON document per URL on stdout, with a tab between the URL and the document itself:
build_updated_json_documents() {
jq --slurpfile loc "loc.json" \
--argjson tmpl "$(<tmpl.json)" \
-c -Rr '
($tmpl | .locations|=$loc[]) as $tmpl_with_loc |
. as $URL |
($tmpl_with_loc |
.name=$URL |
.script.events[].url=$URL |
.script.events[].description="Loading of URL \"\($URL)\"") |
"\($URL)\t\(. | tojson)"
' < <(printf '%s\n' "${URL_list[@]}")
}
...and pipe the resulting stream to a function that reads it line-by-line and does the curl requests, piping directly from the copy of curl that would otherwise generate a temp file into the one that produces your final result to be stored:
handle_each_document() {
while IFS=$'\t' read -r url doc; do
# first, ask our remote server to update this document for us
# (would be nice if the server would do this in bulk, no?)
# ...and then forward that request to the other server.
curl --location \
--request PUT \
"$curl_combined_update_put" \
-H "accept: application/json; charset=utf-8" \
-H "Content-Type: application/json; charset=utf-8" \
-H "Authorization: Bearer ${token}" \
-d @- <<<"$doc" \
| curl --location \
--request PUT \
"$curl_combined_update_put" \
-H "accept: application/json; charset=utf-8" \
-H "Content-Type: application/json; charset=utf-8" \
-H "Authorization: Bearer ${token}" \
-d- >"${url}-updated.json"
done
}
build_updated_json_documents | handle_each_document

POST data in json array format with curl command in linux

I am able to POST data with curl in JSON format using a loop in a bash script.
Current
$(curl -o /dev/null -s -X POST "${url}" -H "accept: application/json" -H "Authorization: ${base}" -H "Content-Type: application/json" -d "{\"data1\":\"${data1}\",\"data2\":\"${data2}\"}")
[
  {
    "data1": "data1",
    "data2": "data2"
  },
  {
    "data3": "data3",
    "data4": "data4"
  }
]
But my requirement is to achieve the below JSON array format with curl. Please help me append to the JSON with the curl command on every iteration of the loop.
{"Array": [
  {
    "data1": "data1",
    "data2": "data2"
  },
  {
    "data3": "data3",
    "data4": "data4"
  }
]
}
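One way to get that shape (a minimal sketch, assuming jq is available; the loop values here are made-up stand-ins) is to accumulate the objects into a single JSON document inside the loop and send one POST at the end, instead of one POST per iteration:
payload='{"Array": []}'
for i in 1 2; do
    data_a="value-a-$i"    # stand-ins for the real per-iteration values
    data_b="value-b-$i"
    # append one object to the "Array" key on each iteration
    payload=$(jq --arg a "$data_a" --arg b "$data_b" \
        '.Array += [{"data1": $a, "data2": $b}]' <<<"$payload")
done
# single POST with the accumulated array
curl -o /dev/null -s -X POST "${url}" \
    -H "accept: application/json" \
    -H "Authorization: ${base}" \
    -H "Content-Type: application/json" \
    -d "$payload"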

count number http response

I am a beginner with shell scripting. I want to count HTTP response codes from a log file like this:
192.168.1.4 - - [16/May/2019:01:18:07 +0000] "GET /api/v1/tests HTTP/1.1" 200 422 "-" "okhttp/3.11.0"
192.168.1.3 - - [16/May/2019:01:54:24 +0000] "POST /api/v1/test HTTP/1.1" 201 138 "-" "okhttp/3.12.1"
and the output should be:
number, count
200, 1
201, 1
I've tried the wc command (wc -l or wc -w) but could not find a solution.
uniq can work in your case, like below:
egrep -o '200|201' <filename> |sort|uniq -c
To use an array, you can do something like this:
my_array=(200 201)
for i in ${my_array[*]};do egrep -o ${i} <filename> |sort|uniq -c ;done;
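As an alternative (a sketch, assuming the access-log-style format shown in the question, where the status code is the 9th whitespace-separated field; replace access.log with your file), awk can count every status code without hard-coding 200/201:
awk '{ count[$9]++ } END { print "number, count"; for (c in count) print c ", " count[c] }' access.log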

Trying to run Snort 3.0 on Ubuntu 18.04, I get the following error

victim@victim:~$ sudo snort -c /etc/snort/snort.conf
--------------------------------------------------
o")~ Snort++ 3.0.0-250
--------------------------------------------------
Loading /etc/snort/snort.conf:
FATAL: can't load /etc/snort/snort.conf: /etc/snort/snort.conf:2: unexpected symbol near '#'
Fatal Error, Quitting..
I have tried suggestions from previous posts, but nothing works.
#--------------------------------------------------
# VRT Rule Packages Snort.conf
#
# For more information visit us at:
# http://www.snort.org Snort Website
# http://vrt-sourcefire.blogspot.com/ Sourcefire VRT Blog
#
# Mailing list Contact: snort-sigs@lists.sourceforge.net
# False Positive reports: fp@sourcefire.com
# Snort bugs: bugs@snort.org
#
# Compatible with Snort Versions:
# VERSIONS : 2.9.0.0
#
# Snort build options:
# OPTIONS : --enable-ipv6 --enable-gre --enable-mpls --enable-targetbased --enable-decoder-preprocessor-rules --enable-ppm --enable-perfprofiling --enable-zlib --enable-active-response --enable-normalizer --enable-reload --enable-react --enable-flexresp3
#--------------------------------------------------
###################################################
# This file contains a sample snort configuration.
# You should take the following steps to create your own custom configuration:
#
# 1) Set the network variables.
# 2) Configure the decoder
# 3) Configure the base detection engine
# 4) Configure dynamic loaded libraries
# 5) Configure preprocessors
# 6) Configure output plugins
# 7) Customize your rule set
# 8) Customize preprocessor and decoder rule set
# 9) Customize shared object rule set
###################################################
###################################################
# Step #1: Set the network variables. For more information, see README.variables
###################################################
# Setup the network addresses you are protecting
ipvar HOME_NET any
# Set up the external network addresses. Leave as "any" in most situations
ipvar EXTERNAL_NET any
# List of DNS servers on your network
ipvar DNS_SERVERS $HOME_NET
# List of SMTP servers on your network
ipvar SMTP_SERVERS $HOME_NET
# List of web servers on your network
ipvar HTTP_SERVERS $HOME_NET
# List of sql servers on your network
ipvar SQL_SERVERS $HOME_NET
# List of telnet servers on your network
ipvar TELNET_SERVERS $HOME_NET
# List of ssh servers on your network
ipvar SSH_SERVERS $HOME_NET
# List of ports you run web servers on
portvar HTTP_PORTS [80,311,591,593,901,1220,1414,2301,2381,2809,3128,3702,5250,7777,7779,8000,8008,8028,8080,8088,8118,8123,8180,8243,8280,8888,9090,9091,9443,9999,11371]
# List of ports you want to look for SHELLCODE on.
portvar SHELLCODE_PORTS !80
# List of ports you might see oracle attacks on
portvar ORACLE_PORTS 1024:
# List of ports you want to look for SSH connections on:
portvar SSH_PORTS 22
# other variables, these should not be modified
ipvar AIM_SERVERS [64.12.24.0/23,64.12.28.0/23,64.12.161.0/24,64.12.163.0/24,64.12.200.0/24,205.188.3.0/24,205.188.5.0/24,205.188.7.0/24,205.188.9.0/24,205.188.153.0/24,205.188.179.0/24,205.188.248.0/24]
# Path to your rules files (this can be a relative path)
# Note for Windows users: You are advised to make this an absolute path,
# such as: c:\snort\rules
var RULE_PATH /etc/snort/rules
var SO_RULE_PATH /etc/snort/so_rules
var PREPROC_RULE_PATH /etc/snort/preproc_rules
###################################################
# Step #2: Configure the decoder. For more information, see README.decode
###################################################
# Stop generic decode events:
config disable_decode_alerts
# Stop Alerts on experimental TCP options
config disable_tcpopt_experimental_alerts
# Stop Alerts on obsolete TCP options
config disable_tcpopt_obsolete_alerts
# Stop Alerts on T/TCP alerts
config disable_tcpopt_ttcp_alerts
# Stop Alerts on all other TCPOption type events:
config disable_tcpopt_alerts
# Stop Alerts on invalid ip options
config disable_ipopt_alerts
# Alert if value in length field (IP, TCP, UDP) is greater than the length of the packet
# config enable_decode_oversized_alerts
# Same as above, but drop packet if in Inline mode (requires enable_decode_oversized_alerts)
# config enable_decode_oversized_drops
# Configure IP / TCP checksum mode
config checksum_mode: all
# Configure maximum number of flowbit references. For more information, see README.flowbits
# config flowbits_size: 64
# Configure ports to ignore
# config ignore_ports: tcp 21 6667:6671 1356
# config ignore_ports: udp 1:17 53
# Configure active response for non inline operation. For more information, see README.active
# config response: eth0 attempts 2
###################################################
# Step #3: Configure the base detection engine. For more information, see README.decode
###################################################
# Configure PCRE match limitations
config pcre_match_limit: 3500
config pcre_match_limit_recursion: 1500
# Configure the detection engine See the Snort Manual, Configuring Snort - Includes - Config
config detection: search-method ac-split search-optimize max-pattern-len 20
# Configure the event queue. For more information, see README.event_queue
config event_queue: max_queue 8 log 3 order_events content_length
###################################################
# Per packet and rule latency enforcement
# For more information see README.ppm
###################################################
# Per Packet latency configuration
#config ppm: max-pkt-time 250, \
# fastpath-expensive-packets, \
# pkt-log
# Per Rule latency configuration
#config ppm: max-rule-time 200, \
# threshold 3, \
# suspend-expensive-rules, \
# suspend-timeout 20, \
# rule-log alert
###################################################
# Configure Perf Profiling for debugging
# For more information see README.PerfProfiling
###################################################
#config profile_rules: print all, sort avg_ticks
#config profile_preprocs: print all, sort avg_ticks
###################################################
# Step #4: Configure dynamic loaded libraries.
# For more information, see Snort Manual, Configuring Snort - Dynamic Modules
###################################################
# path to dynamic preprocessor libraries
dynamicpreprocessor directory /usr/local/lib/snort_dynamicpreprocessor/
# path to base preprocessor engine
dynamicengine /usr/local/lib/snort_dynamicengine/libsf_engine.so
# path to dynamic rules libraries
dynamicdetection directory /usr/local/lib/snort_dynamicrules
###################################################
# Step #5: Configure preprocessors
# For more information, see the Snort Manual, Configuring Snort - Preprocessors
###################################################
# Inline packet normalization. For more information, see README.normalize
# Does nothing in IDS mode
preprocessor normalize_ip4
preprocessor normalize_tcp: ips ecn stream
preprocessor normalize_icmp4
preprocessor normalize_ip6
preprocessor normalize_icmp6
# Target-based IP defragmentation. For more information, see README.frag3
preprocessor frag3_global: max_frags 65536
preprocessor frag3_engine: policy windows detect_anomalies overlap_limit 10 min_fragment_length 100 timeout 180
# Target-Based stateful inspection/stream reassembly. For more information, see README.stream5
preprocessor stream5_global: max_tcp 8192, track_tcp yes, track_udp yes, track_icmp no max_active_responses 2 min_response_seconds 5
preprocessor stream5_tcp: policy windows, detect_anomalies, require_3whs 180, \
overlap_limit 10, small_segments 3 bytes 150, timeout 180, \
ports client 21 22 23 25 42 53 79 109 110 111 113 119 135 136 137 139 143 \
161 445 513 514 587 593 691 1433 1521 2100 3306 6665 6666 6667 6668 6669 \
7000 32770 32771 32772 32773 32774 32775 32776 32777 32778 32779, \
ports both 80 311 443 465 563 591 593 636 901 989 992 993 994 995 1220 1414 2301 2381 2809 3128 3702 5250 6907 7702 7777 7779 \
7801 7900 7901 7902 7903 7904 7905 7906 7908 7909 7910 7911 7912 7913 7914 7915 7916 \
7917 7918 7919 7920 8000 8008 8028 8080 8088 8118 8123 8180 8243 8280 8888 9090 9091 9443 9999 11371
preprocessor stream5_udp: timeout 180
# performance statistics. For more information, see the Snort Manual, Configuring Snort - Preprocessors - Performance Monitor
# preprocessor perfmonitor: time 300 file /var/snort/snort.stats pktcnt 10000
# HTTP normalization and anomaly detection. For more information, see README.http_inspect
preprocessor http_inspect: global iis_unicode_map unicode.map 1252 compress_depth 20480 decompress_depth 20480
preprocessor http_inspect_server: server default \
chunk_length 500000 \
server_flow_depth 0 \
client_flow_depth 0 \
post_depth 65495 \
oversize_dir_length 500 \
max_header_length 750 \
max_headers 100 \
ports { 80 311 591 593 901 1220 1414 2301 2381 2809 3128 3702 5250 7777 7779 8000 8008 8028 8080 8088 8118 8123 8180 8243 8280 8888 9090 9091 9443 9999 11371 } \
non_rfc_char { 0x00 0x01 0x02 0x03 0x04 0x05 0x06 0x07 } \
enable_cookie \
extended_response_inspection \
inspect_gzip \
normalize_utf \
unlimited_decompress \
apache_whitespace no \
ascii no \
bare_byte no \
base36 no \
directory no \
double_decode no \
iis_backslash no \
iis_delimiter no \
iis_unicode no \
multi_slash no \
utf_8 no \
u_encode yes \
webroot no
# ONC-RPC normalization and anomaly detection. For more information, see the Snort Manual, Configuring Snort - Preprocessors - RPC Decode
preprocessor rpc_decode: 111 32770 32771 32772 32773 32774 32775 32776 32777 32778 32779 no_alert_multiple_requests no_alert_large_fragments no_alert_incomplete
# Back Orifice detection.
preprocessor bo
# FTP / Telnet normalization and anomaly detection. For more information, see README.ftptelnet
preprocessor ftp_telnet: global inspection_type stateful encrypted_traffic no
preprocessor ftp_telnet_protocol: telnet \
ayt_attack_thresh 20 \
normalize ports { 23 } \
detect_anomalies
preprocessor ftp_telnet_protocol: ftp server default \
def_max_param_len 100 \
ports { 21 2100 3535 } \
telnet_cmds yes \
ignore_telnet_erase_cmds yes \
ftp_cmds { ABOR ACCT ADAT ALLO APPE AUTH CCC CDUP } \
ftp_cmds { CEL CLNT CMD CONF CWD DELE ENC EPRT } \
ftp_cmds { EPSV ESTA ESTP FEAT HELP LANG LIST LPRT } \
ftp_cmds { LPSV MACB MAIL MDTM MIC MKD MLSD MLST } \
ftp_cmds { MODE NLST NOOP OPTS PASS PASV PBSZ PORT } \
ftp_cmds { PROT PWD QUIT REIN REST RETR RMD RNFR } \
ftp_cmds { RNTO SDUP SITE SIZE SMNT STAT STOR STOU } \
ftp_cmds { STRU SYST TEST TYPE USER XCUP XCRC XCWD } \
ftp_cmds { XMAS XMD5 XMKD XPWD XRCP XRMD XRSQ XSEM } \
ftp_cmds { XSEN XSHA1 XSHA256 } \
alt_max_param_len 0 { ABOR CCC CDUP ESTA FEAT LPSV NOOP PASV PWD QUIT REIN STOU SYST XCUP XPWD } \
alt_max_param_len 200 { ALLO APPE CMD HELP NLST RETR RNFR STOR STOU XMKD } \
alt_max_param_len 256 { CWD RNTO } \
alt_max_param_len 400 { PORT } \
alt_max_param_len 512 { SIZE } \
chk_str_fmt { ACCT ADAT ALLO APPE AUTH CEL CLNT CMD } \
chk_str_fmt { CONF CWD DELE ENC EPRT EPSV ESTP HELP } \
chk_str_fmt { LANG LIST LPRT MACB MAIL MDTM MIC MKD } \
chk_str_fmt { MLSD MLST MODE NLST OPTS PASS PBSZ PORT } \
chk_str_fmt { PROT REST RETR RMD RNFR RNTO SDUP SITE } \
chk_str_fmt { SIZE SMNT STAT STOR STRU TEST TYPE USER } \
chk_str_fmt { XCRC XCWD XMAS XMD5 XMKD XRCP XRMD XRSQ } \
chk_str_fmt { XSEM XSEN XSHA1 XSHA256 } \
cmd_validity ALLO < int [ char R int ] > \
cmd_validity EPSV < { char 12|string } > \
cmd_validity MACB < string > \
cmd_validity MDTM < [ date nnnnnnnnnnnnnn[.n[n[n]]] ] string > \
cmd_validity MODE < char ASBCZ > \
cmd_validity PORT < host_port > \
cmd_validity PROT < char CSEP > \
cmd_validity STRU < char FRPO [ string ] > \
cmd_validity TYPE < { char AE [ char NTC ] | char I | char L [ number ] } >
preprocessor ftp_telnet_protocol: ftp client default \
max_resp_len 256 \
bounce yes \
ignore_telnet_erase_cmds yes \
telnet_cmds yes
# SMTP normalization and anomaly detection. For more information, see README.SMTP
preprocessor smtp: ports { 25 465 587 691 } \
inspection_type stateful \
enable_mime_decoding \
max_mime_depth 20480 \
normalize cmds \
normalize_cmds { ATRN AUTH BDAT CHUNKING DATA DEBUG EHLO EMAL ESAM ESND ESOM ETRN EVFY } \
normalize_cmds { EXPN HELO HELP IDENT MAIL NOOP ONEX QUEU QUIT RCPT RSET SAML SEND SOML } \
normalize_cmds { STARTTLS TICK TIME TURN TURNME VERB VRFY X-ADAT X-DRCP X-ERCP X-EXCH50 } \
normalize_cmds { X-EXPS X-LINK2STATE XADR XAUTH XCIR XEXCH50 XGEN XLICENSE XQUE XSTA XTRN XUSR } \
max_command_line_len 512 \
max_header_line_len 1000 \
max_response_line_len 512 \
alt_max_command_line_len 260 { MAIL } \
alt_max_command_line_len 300 { RCPT } \
alt_max_command_line_len 500 { HELP HELO ETRN EHLO } \
alt_max_command_line_len 255 { EXPN VRFY ATRN SIZE BDAT DEBUG EMAL ESAM ESND ESOM EVFY IDENT NOOP RSET } \
alt_max_command_line_len 246 { SEND SAML SOML AUTH TURN ETRN DATA RSET QUIT ONEX QUEU STARTTLS TICK TIME TURNME VERB X-EXPS X-LINK2STATE XADR XAUTH XCIR XEXCH50 XGEN XLICENSE XQUE XSTA XTRN XUSR } \
valid_cmds { ATRN AUTH BDAT CHUNKING DATA DEBUG EHLO EMAL ESAM ESND ESOM ETRN EVFY } \
valid_cmds { EXPN HELO HELP IDENT MAIL NOOP ONEX QUEU QUIT RCPT RSET SAML SEND SOML } \
valid_cmds { STARTTLS TICK TIME TURN TURNME VERB VRFY X-ADAT X-DRCP X-ERCP X-EXCH50 } \
valid_cmds { X-EXPS X-LINK2STATE XADR XAUTH XCIR XEXCH50 XGEN XLICENSE XQUE XSTA XTRN XUSR } \
xlink2state { enabled }
# Portscan detection. For more information, see README.sfportscan
# preprocessor sfportscan: proto { all } memcap { 10000000 } sense_level { low }
# ARP spoof detection. For more information, see the Snort Manual - Configuring Snort - Preprocessors - ARP Spoof Preprocessor
# preprocessor arpspoof
# preprocessor arpspoof_detect_host: 192.168.40.1 f0:0f:00:f0:0f:00
# SSH anomaly detection. For more information, see README.ssh
preprocessor ssh: server_ports { 22 } \
autodetect \
max_client_bytes 19600 \
max_encrypted_packets 20 \
max_server_version_len 100 \
enable_respoverflow enable_ssh1crc32 \
enable_srvoverflow enable_protomismatch
# SMB / DCE-RPC normalization and anomaly detection. For more information, see README.dcerpc2
preprocessor dcerpc2: memcap 102400, events [co ]
preprocessor dcerpc2_server: default, policy WinXP, \
detect [smb [139,445], tcp 135, udp 135, rpc-over-http-server 593], \
autodetect [tcp 1025:, udp 1025:, rpc-over-http-server 1025:], \
smb_max_chain 3
# DNS anomaly detection. For more information, see README.dns
preprocessor dns: ports { 53 } enable_rdata_overflow
# SSL anomaly detection and traffic bypass. For more information, see README.ssl
preprocessor ssl: ports { 443 465 563 636 989 992 993 994 995 7801 7702 7900 7901 7902 7903 7904 7905 7906 6907 7908 7909 7910 7911 7912 7913 7914 7915 7916 7917 7918 7919 7920 }, trustservers, noinspect_encrypted
# SDF sensitive data preprocessor. For more information see README.sensitive_data
preprocessor sensitive_data: alert_threshold 25
###################################################
# Step #6: Configure output plugins
# For more information, see Snort Manual, Configuring Snort - Output Modules
###################################################
# unified2
# Recommended for most installs
# output unified2: filename merged.log, limit 128, nostamp, mpls_event_types, vlan_event_types
# Additional configuration for specific types of installs
# output alert_unified2: filename snort.alert, limit 128, nostamp
# output log_unified2: filename snort.log, limit 128, nostamp
# syslog
# output alert_syslog: LOG_AUTH LOG_ALERT
# pcap
# output log_tcpdump: tcpdump.log
# database
# output database: alert, <db_type>, user=<username> password=<password> test dbname=<name> host=<hostname>
# output database: log, <db_type>, user=<username> password=<password> test dbname=<name> host=<hostname>
# prelude
# output alert_prelude
# metadata reference data. do not modify these lines
include classification.config
include reference.config
###################################################
# Step #7: Customize your rule set
# For more information, see Snort Manual, Writing Snort Rules
#
# NOTE: All categories are enabled in this conf file
###################################################
# site specific rules
include $RULE_PATH/local.rules
include $RULE_PATH/attack-responses.rules
include $RULE_PATH/backdoor.rules
include $RULE_PATH/bad-traffic.rules
include $RULE_PATH/blacklist.rules
include $RULE_PATH/botnet-cnc.rules
include $RULE_PATH/chat.rules
include $RULE_PATH/content-replace.rules
include $RULE_PATH/ddos.rules
include $RULE_PATH/dns.rules
include $RULE_PATH/dos.rules
include $RULE_PATH/exploit.rules
include $RULE_PATH/finger.rules
include $RULE_PATH/ftp.rules
include $RULE_PATH/icmp.rules
include $RULE_PATH/icmp-info.rules
include $RULE_PATH/imap.rules
include $RULE_PATH/info.rules
include $RULE_PATH/misc.rules
include $RULE_PATH/multimedia.rules
include $RULE_PATH/mysql.rules
include $RULE_PATH/netbios.rules
include $RULE_PATH/nntp.rules
include $RULE_PATH/oracle.rules
include $RULE_PATH/other-ids.rules
include $RULE_PATH/p2p.rules
include $RULE_PATH/phishing-spam.rules
include $RULE_PATH/policy.rules
include $RULE_PATH/pop2.rules
include $RULE_PATH/pop3.rules
include $RULE_PATH/rpc.rules
include $RULE_PATH/rservices.rules
include $RULE_PATH/scada.rules
include $RULE_PATH/scan.rules
include $RULE_PATH/shellcode.rules
include $RULE_PATH/smtp.rules
include $RULE_PATH/snmp.rules
include $RULE_PATH/specific-threats.rules
include $RULE_PATH/spyware-put.rules
include $RULE_PATH/sql.rules
include $RULE_PATH/telnet.rules
include $RULE_PATH/tftp.rules
include $RULE_PATH/virus.rules
include $RULE_PATH/voip.rules
include $RULE_PATH/web-activex.rules
include $RULE_PATH/web-attacks.rules
include $RULE_PATH/web-cgi.rules
include $RULE_PATH/web-client.rules
include $RULE_PATH/web-coldfusion.rules
include $RULE_PATH/web-frontpage.rules
include $RULE_PATH/web-iis.rules
include $RULE_PATH/web-misc.rules
include $RULE_PATH/web-php.rules
include $RULE_PATH/x11.rules
###################################################
# Step #8: Customize your preprocessor and decoder alerts
# For more information, see README.decoder_preproc_rules
###################################################
# decoder and preprocessor event rules
# include $PREPROC_RULE_PATH/preprocessor.rules
# include $PREPROC_RULE_PATH/decoder.rules
# include $PREPROC_RULE_PATH/sensitive-data.rules
###################################################
# Step #9: Customize your Shared Object Snort Rules
# For more information, see http://vrt-sourcefire.blogspot.com/2009/01/using-vrt-certified-shared-object-rules.html
###################################################
# dynamic library rules
# include $SO_RULE_PATH/bad-traffic.rules
# include $SO_RULE_PATH/chat.rules
# include $SO_RULE_PATH/dos.rules
# include $SO_RULE_PATH/exploit.rules
# include $SO_RULE_PATH/icmp.rules
# include $SO_RULE_PATH/imap.rules
# include $SO_RULE_PATH/misc.rules
# include $SO_RULE_PATH/multimedia.rules
# include $SO_RULE_PATH/netbios.rules
# include $SO_RULE_PATH/nntp.rules
# include $SO_RULE_PATH/p2p.rules
# include $SO_RULE_PATH/smtp.rules
# include $SO_RULE_PATH/sql.rules
# include $SO_RULE_PATH/web-activex.rules
# include $SO_RULE_PATH/web-client.rules
# include $SO_RULE_PATH/web-iis.rules
# include $SO_RULE_PATH/web-misc.rules
# Event thresholding or suppression commands. See threshold.conf
include threshold.conf
You are trying to use a Snort version 2 configuration file with Snort version 3. Snort 3 does not read the old snort.conf format; it uses a Lua-based configuration (snort.lua), so change the -c argument to point at that file.
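For example (a sketch; the exact path depends on how Snort 3 was installed, /usr/local/etc/snort/snort.lua is a common default):
sudo snort -c /usr/local/etc/snort/snort.lua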

Dynamic HTTP URI with recipientList returns 404 on the second call

I have a two-step Camel route workflow - both steps make a POST call to the same host, but the URL and body are different. The first call returns part of the URL for the second call.
Here is the code:
// I register converter for different request types
getContext().getTypeConverterRegistry().addTypeConverters(new RequestConverter());
from("direct:two-step-flow")
.setHeader("paramId", body().method("getParamId")
.setHeader("url", "http://localhost:8080/api/${header.paramId}
.convertBodyTo(Step1Request.class)
.to("direct:call-remote-service")
.convertBodyTo(Step2Request.class) // converter sets newParamFromResponse
.setHeader("url", "http://localhost:8080/api/${header.paramId}/${body.newParamFromResponse}
.to("direct:call-remote-service")
.end();
from("direct:call-remote-service")
.marshal().json(JsonLibrary.Jackson)
.recipientList(header("url"))
.unmarshal().json(JsonLibrary.Jackson, GenericResponse.class)
.end();
The first step works fine; the HTTP flow is something like:
httpclient.wire.header - >> "POST /api/p1 HTTP/1.1[\r][\n]"
httpclient.wire.content - >> "{"amount":1.22,"reason":"some reason","relation-id":"12345"}"
httpclient.wire.header - << "HTTP/1.1 200 OK[\r][\n]"
httpclient.wire.header - << "HTTP/1.1 200 OK[\r][\n]"
httpclient.wire.header - << "Content-Type: application/json[\r][\n]"
httpclient.wire.header - << "Transfer-Encoding: chunked[\r][\n]"
httpclient.wire.header - << "Server: Jetty(9.3.11.v20160721)[\r][\n]"
httpclient.wire.header - << "[\r][\n]"
org.apache.camel.component.http.HttpProducer - Http responseCode: 200
The second step fails with HTTP 404:
httpclient.wire.header - >> "POST /api/p1/Id1 HTTP/1.1[\r][\n]"
httpclient.wire.content - >> "{"action":"CONFIRM","reason":"reason to confirm","relation-id":"12345"}"
org.apache.camel.component.http.HttpProducer - Http responseCode: 404
httpclient.wire.content - << "<html>[\n]"
httpclient.wire.content - << "<head>[\n]"
httpclient.wire.content - << "<meta http-equiv="Content-Type"
content="text/html;charset=ISO-8859-1"/>[\n]"
httpclient.wire.content - << "<title>Error 404 </title>[\n]"
httpclient.wire.content - << "</head>[\n]"
httpclient.wire.content - << "<body>[\n]"
httpclient.wire.content - << "<h2>HTTP ERROR: 404</h2>[\n]"
httpclient.wire.content - << "<p>Problem accessing /api/p1/Id1. Reason:[\n]"
httpclient.wire.content - << "<pre> Not Found</pre></p>[\n]"
httpclient.wire.content - << "<hr />Powered by Jetty:// 9.3.11.v20160721<hr/>[\n]"
httpclient.wire.content - << "</body>[\n]"
httpclient.wire.content - << "</html>[\n]"
The same POST works with curl:
curl 'http://localhost:8080/api/p1/Id1' -i -X POST -H
'Accept:application/json' -H 'Content-Type: application/json' -d
'{
"action" : "CONFIRM",
"relation-id" : "12345",
"reason" : "reason to confirm"
}'
HTTP/1.1 200 OK
Content-Type: application/json
Transfer-Encoding: chunked
Server: Jetty(9.3.11.v20160721)
I might be misusing .recipientList; any help is appreciated.
Thanks
Maybe it's some HTTP response headers that are picked up and used for the 2nd call. Therefore, try removing the HTTP headers between the two calls: http://camel.apache.org/how-to-remove-the-http-protocol-headers-in-the-camel-message.html
Add
.removeHeaders("CamelHttp*")
before you call the route with the recipient list:
.to("direct:call-remote-service")
