I have written a basic Telnet Server in C language, and I am testing it against some telnet clients.
However, depending on the client I use (PuTTY, for example), the client acts in active mode by default.
Is there a way for the server to always mandate the client to act in passive mode?
I am asking this because, in my understanding, if the server mandates it, then I do not need to implement any protocol-specific details, only the basics needed to handle this mandatory passive mode. So, in the end, my Telnet Server would just be a simple TCP socket.
Update 1:
Here I describe the data received when a client connects to the server:
1. Launch the Telnet Server
2. Telnet Client connects (and sends Telnet protocol data)
received = {ffff1fffff20ffff18ffff27ffff01ffff03ffff03}
The data is hex encoded; decoding it according to the Telnet specification shows that the client is trying to establish some configuration before any user data is exchanged.
On the Telnet Server side, I simply ignore received data starting with 0xff (IAC code).
That solves the problem for a while.
3. Type some data on the Client side and hit Enter
4. Telnet Server receives the data
5. Type some data on the Client side and hit Enter
6. Telnet Server receives the data + Telnet protocol data
Here is the received data (I sent "abcdef"):
received = {616263646566ffff18ffff27ffff01ffff03ffff03}
As you can see, I received what I sent, but I also got some telnet protocol stuff. That is because the Telnet Client is in Active mode.
I wish that at some intermediate step between 2 and 3 the Telnet Server could set the Telnet Client to run in passive mode, so that I would not receive any Telnet protocol data at step 6.
Update 2:
As requested in the comments, here are more details about the "passive mode". I mean the Telnet negotiation mode: (1) passive mode, where the Telnet client/server will not send any negotiation data, only user data; (2) active mode, where the Telnet client/server sends negotiation data to configure/handshake which features are to be used/set.
And yes, I also could not find this specified in the protocol itself, only in the PuTTY documentation.
TL;DR: The Telnet protocol and well-known options offer no support for what you describe.
I mean the Telnet negotiation mode: (1) passive mode, where the Telnet client/server will not send any negotiation data, only user data; (2) active mode, where the Telnet client/server sends negotiation data to configure/handshake which features are to be used/set.
The Telnet specifications do not define a "passive mode" by any name. There is no defined way for one side to request that the other refrain from attempting to negotiate protocol options, much less to refrain from sending any protocol commands at all.
On the Telnet Server side, I simply ignore received data starting with 0xff (IAC code).
There are many more Telnet commands than those related to option negotiation. Ignoring all Telnet commands makes your server not only impolite, but deficient. You reference PuTTY's "passive mode", but even in that mode, PuTTY still emits Telnet commands for purposes other than option negotiation, and it likely still performs option negotiation too, albeit after allowing the server to negotiate first.
Additionally, option negotiation and other protocol commands are asynchronous, which is why the approach you describe doesn't stall communication altogether, but that does not make it valid to ignore protocol commands. The server should emit a response to each option negotiation command it receives, even if that response is negative. That will make it less likely for clients to make renewed attempts to negotiate the same options, but note that the server cannot say "never" to any option request, only "no".
Although it is not obligated to accept requests or offers to enable non-default options, the server is obligated to honor requests to disable options:
Clearly, a party may always refuse a request to enable, and must never refuse a request to disable some option. (RFC 854, p. 2)
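To make that concrete, here is a minimal sketch (in C, since that is what the server is written in) of what responding negatively could look like. The function name, buffer handling, and the assumption that the server never wants to enable any option are mine, not from the question; subnegotiations and commands split across reads are deliberately not handled.

    /* Minimal sketch: refuse every request to enable an option. Because this
     * server never enables anything, DONT/WONT from the client are already
     * satisfied and need no reply (answering them risks negotiation loops).
     * Command values are from RFC 854. Assumes buf/len hold the bytes just
     * read from the connected socket "sock". */
    #include <stddef.h>
    #include <unistd.h>

    #define IAC  255
    #define DONT 254
    #define DO   253
    #define WONT 252
    #define WILL 251

    static void refuse_options(int sock, const unsigned char *buf, size_t len)
    {
        for (size_t i = 0; i + 2 < len; i++) {
            if (buf[i] != IAC)
                continue;
            unsigned char reply[3] = { IAC, 0, buf[i + 2] };
            if (buf[i + 1] == DO)
                reply[1] = WONT;        /* "please enable X" -> "I won't" */
            else if (buf[i + 1] == WILL)
                reply[1] = DONT;        /* "I will do X"     -> "don't"   */
            else
                continue;               /* not an enable request */
            write(sock, reply, sizeof reply);
            i += 2;                     /* skip over the 3-byte command */
        }
    }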
The only way for a Telnet endpoint to receive a data byte with value 255 (decimal) is via the IAC mechanism.
If a Telnet endpoint does not respond to AYT commands, then the other endpoint may sometimes conclude that the session has been dropped when in fact it is still active.
A Telnet endpoint that ignores protocol commands does not support standard, expected terminal operations including break signals, interrupt process signals, erase character and erase line commands, and terminal synch. If yours is a special-purpose Telnet implementation then perhaps some of these do not actually require any server-side action, but the synch, at least, is about the data stream between server and client, and I don't see any way that a conforming Telnet implementation can fail to provide for it.
Related
When a client connects to my server, and after connecting they switch to a VPN or something, the server side still says the socket is alive and still tries to read from it. I tried using another thread to check all my sockets constantly with read() and close any that return -1, but it still doesn't do anything.
It depends very much on what type of protocol you use, but the generalized answer is: yes and no. You have to learn the network protocol stack to know what you can do in your situation, the details of which you did not disclose.
The usual way to solve this problem is to establish some policy, or two-way communication. E.g.: if there has been no data or "I'm alive" message from client X for a duration of time Y, we close the connection. Or: send a regular "ping" message to client X and expect a response before period Y expires.
If we're talking TCP, and the client's connection is properly closed, a message is sent to the server, so the server knows the connection is closed and read/recv will return 0 bytes, indicating EOF.
But you're asking about the times when the client becomes unable to communicate with the server. Detecting an absence of messages is necessarily done using a timeout.
You can have the server "ping" the client (send a message to which the client must respond) periodically.
You can have the client send a message periodically (a "heartbeat") when idle.
Either way, no message (of any kind) for X seconds indicates a broken connection.
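A minimal sketch of that timeout idea in C, assuming a blocking, one-client-per-loop server; the constant and function names are illustrative only.

    /* Wait up to IDLE_TIMEOUT_MS for anything from the client (data or a
     * heartbeat). Silence for that long is treated as a broken connection. */
    #include <poll.h>
    #include <unistd.h>

    #define IDLE_TIMEOUT_MS (30 * 1000)     /* 30 s of silence = dead */

    /* Returns 1 while the peer looks alive, 0 when it should be dropped. */
    static int wait_for_client(int sock)
    {
        struct pollfd pfd = { .fd = sock, .events = POLLIN };
        int ready = poll(&pfd, 1, IDLE_TIMEOUT_MS);
        if (ready <= 0)
            return 0;                       /* timeout or poll error */

        char buf[512];
        ssize_t n = read(sock, buf, sizeof buf);
        if (n <= 0)
            return 0;                       /* orderly close (0) or error (-1) */
        /* ... handle n bytes of data or a heartbeat message here ... */
        return 1;
    }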
If you enable the SO_KEEPALIVE socket option on each new TCP connection, the OS will automatically ping the remote side periodically to see if it still responds, and close the connection if it doesn't. The default timeout is several hours, but many OSes allow you to configure a lower timeout on a per-socket basis. Unfortunately, each one is different in how to do this. Linux, for example, uses the TCP_KEEPIDLE socket option. NetBSD (And probably other BSDs) uses TCP_KEEPALIVE. And so on.
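For example, on Linux the per-socket knobs look roughly like this (a sketch; the values are arbitrary, and the TCP_KEEP* option names below are Linux-specific, as noted above):

    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <netinet/tcp.h>

    static int enable_keepalive(int sock)
    {
        int on = 1;
        int idle = 60;      /* seconds of idleness before the first probe */
        int interval = 10;  /* seconds between probes */
        int count = 5;      /* unanswered probes before the connection drops */

        if (setsockopt(sock, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof on) < 0)
            return -1;
        if (setsockopt(sock, IPPROTO_TCP, TCP_KEEPIDLE, &idle, sizeof idle) < 0)
            return -1;
        if (setsockopt(sock, IPPROTO_TCP, TCP_KEEPINTVL, &interval, sizeof interval) < 0)
            return -1;
        if (setsockopt(sock, IPPROTO_TCP, TCP_KEEPCNT, &count, sizeof count) < 0)
            return -1;
        return 0;
    }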
I'm trying to code a TCP server in the C language. I just noticed that the accept() function returns once the connection is already established.
Some clients are flooding the server with random data, and some clients just send random data once. After that, I want to close their current connection and refuse future connections for a few minutes (or more, depending on how much load the program is under).
I can save bad clients' IP addresses in an array, and I can save timings too, but I can't find any function to abort a current connection or deny future connections from bad clients.
I found a function for the Windows OS called WSAAccept that lets you deny connections by user choice, but I don't use Windows.
I tried to code a raw TCP server, which gives you access to the TCP packet from the start, including the whole TCP header, and which doesn't accept connections automatically. I handled connections on the program side, including SYN, ACK, and the other TCP signals. It worked, but then I noticed the raw TCP server receives every packet on my network interface; when other programs generate heavy traffic, it makes my program laggy too.
I tried using libnetfilter, which lets you filter all the traffic on your network interface. It works too, but like the raw TCP server it also receives the whole network interface's packets, which makes it slow when there is a lot of traffic. I also compared libnetfilter with iptables; libnetfilter is slower than iptables.
So, in summary: how can I abort a client's current and future connections without hurting other clients' connections?
I am on Linux, Debian 10.
Once you do blacklisting at the packet level, you can very quickly become vulnerable to very trivial attacks based on IP spoofing. Against a very basic implementation, an attacker could use your packet-level blacklisting to blacklist anyone he wants just by sending you many packets with a fake source IP address. Usually you don't want to touch that kind of filtering (unless you really know what you are doing); just trust your firewall, etc.
So I recommend simply closing the file descriptor immediately after getting it from accept().
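In code, the application-level version of that is roughly the following sketch; is_blacklisted() stands in for whatever array/timing lookup you already have and is not a real library function.

    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <unistd.h>

    int is_blacklisted(struct in_addr addr);   /* your own lookup over the array */

    static void accept_loop(int listen_fd)
    {
        for (;;) {
            struct sockaddr_in peer;
            socklen_t len = sizeof peer;
            int fd = accept(listen_fd, (struct sockaddr *)&peer, &len);
            if (fd < 0)
                continue;
            if (is_blacklisted(peer.sin_addr)) {
                close(fd);                     /* drop it immediately, nothing else */
                continue;
            }
            /* ... hand fd to the normal client-handling code ... */
        }
    }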
There's already a question, How exactly does a remote program like TeamViewer work, which gives a basic description, but I'm interested in how the comms work once the client has registered with the server. If the client is behind a NAT then it won't have its own public IP address, so how can the server (or another client) send a message to it? Or does the client just keep polling the server to see if it's got any requests?
Are there any open source equivalents of LogMeIn or TeamViewer?
The simplest and most reliable way (although not always the most efficient) is to have each client make an outgoing TCP connection to a well-known server somewhere and keep that connection open. As long as the TCP connection is open, data can pass over that TCP connection in either direction at any time. It appears that both LogMeIn and TeamViewer use this method, at least as a fall-back. The main drawbacks for this technique are that all data has to pass through a TeamViewer/LogMeIn company server (which can become a bottleneck), and that TCP doesn't handle dropped packets very well -- it will stall and wait for the dropped packets to be resent, rather than giving up on them and sending newer data instead.
The other technique that they can sometimes use (in order to get better performance) is UDP hole-punching. That technique relies on the fact that many firewalls will accept incoming UDP packets from remote hosts that the firewalled-host has recently sent an outgoing UDP packet to. Given that, the TeamViewer/LogMeIn company's server can tell both clients to send an outgoing packet to the IP address of the other client's firewall, and after that (hopefully) each firewall will accept UDP packets from the other client's Internet-facing IP address. This doesn't always work, though, since different firewalls work in different ways and may not include the aforementioned UDP-allowing logic.
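To illustrate just the punching step, here is a rough sketch in C. It assumes the rendezvous server has already told this side the peer's public IP and port (203.0.113.7:40000 below are placeholders); how that exchange happens is not shown.

    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <stdio.h>

    int main(void)
    {
        int sock = socket(AF_INET, SOCK_DGRAM, 0);

        struct sockaddr_in peer = {0};
        peer.sin_family = AF_INET;
        peer.sin_port = htons(40000);                        /* peer's public port */
        inet_pton(AF_INET, "203.0.113.7", &peer.sin_addr);   /* peer's public IP */

        /* Punch: this outgoing datagram creates the NAT/firewall mapping. */
        sendto(sock, "hello", 5, 0, (struct sockaddr *)&peer, sizeof peer);

        /* If the peer does the same towards us, its packets should now get through. */
        char buf[1500];
        ssize_t n = recvfrom(sock, buf, sizeof buf, 0, NULL, NULL);
        if (n > 0)
            printf("got %zd bytes from peer\n", n);
        return 0;
    }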
This is a rather convoluted question, and for that I apologize. I wrote a Linux C sockets application, a basic framework for a simplistic chat server. The server is running on my laptop. The client is Telnet at the moment, until I write a dedicated client application (that'll be more secure, hopefully). There are better applications for sending generic network data from a client end, I know, but I got interested in why a certain thing happens on one Telnet client but not another.
The first Telnet client test was on another Linux laptop. It works as expected. The next, however, was a Blackberry app called BBSSH that allows Telnet and SSH connections. I went via the Telnet option, and it works too. Except, it doesn't exactly.
The server code does the usual read call to retrieve a block of data, which gets treated as a string, i.e. a message. The former client buffers input until I hit Enter, and then sends one string of characters. The BB app, however, sends every single character as if I'd been pressing Enter after each of them, which I haven't. Obviously this is something to do with buffering, what certain clients class as an EOL in the user input, etc. I just can't pinpoint it.
To illustrate, here is the server outputting messages it's received from the clients.
First, the message from the Linux client:
client name: this is a test
Now, for BBSSH:
client name: t
client name: h
client name: i
client name: s
client name:
client name: i
client name: s
client name:
client name: a
client name:
client name: t
client name: e
client name: s
client name: t
Any help?
Telnet clients can operate in line-mode or character-mode. The BBSSH client seems to be operating in character-mode for some reason.
It's possible that your server could force the client into line-mode by sending the client an instruction to that effect during the negotiation that takes place at the start of a telnet connection.
The byte sequence your server would need to send to the client is 255 253 34 (0xFF 0xFD 0x22), which translates as "Interpret As Command, Do, Linemode". If the client is willing/able to operate in line-mode, it should reply with 255 251 34 ("Interpret As Command, Will, Linemode").
If this is all new to you (i.e. your telnet server currently doesn't do any negotiation at all), search for terms like "telnet negotiation" or take a look at some of the relevant RFCs (RFC 854 is Telnet itself; RFC 1184 covers the Linemode option).
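If it helps, here is a minimal sketch of that server-side request in C (the function name is mine; whether it has any effect still depends on the client's reply):

    #include <unistd.h>

    #define IAC      255
    #define DO       253
    #define LINEMODE 34

    /* Send IAC DO LINEMODE right after accept(); the client should answer
     * IAC WILL LINEMODE (255 251 34) if it can, or IAC WONT LINEMODE
     * (255 252 34) if it refuses. */
    static void request_linemode(int client_fd)
    {
        unsigned char cmd[3] = { IAC, DO, LINEMODE };
        write(client_fd, cmd, sizeof cmd);
    }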
TCP is stream-orientated, and so there's no such thing as a "message". You might receive all the data together, or just a fragment of it at a time.
In your case, you might want to buffer anything received until you hit an EOL marker.
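A minimal sketch of that buffering in C; the fixed-size buffer and the handle_line() callback are illustrative, not part of your server.

    #include <string.h>
    #include <unistd.h>

    void handle_line(const char *line, size_t len);   /* your message handler */

    static void read_lines(int fd)
    {
        char buf[4096];
        size_t used = 0;

        for (;;) {
            ssize_t n = read(fd, buf + used, sizeof buf - used);
            if (n <= 0)
                break;                          /* EOF or error */
            used += (size_t)n;

            char *nl;
            while ((nl = memchr(buf, '\n', used)) != NULL) {
                size_t linelen = (size_t)(nl - buf);
                handle_line(buf, linelen);      /* one complete "message" */
                memmove(buf, nl + 1, used - linelen - 1);
                used -= linelen + 1;
            }
            if (used == sizeof buf)
                used = 0;                       /* overlong line: discard (sketch) */
        }
    }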
I have looked through many pages and forums, but am still unsure about this. I am writing a project where the client reads in a txt file of numbers and sends the numbers to the server, which will do some computation and send the result back to the client. Is it possible to connect a client to multiple servers using UDP? If so, an explanation would be nice; I don't think I quite understand UDP fully yet. I am writing this in C. The reason for connecting to multiple servers from one client is that I need to run the client using 1, 2, 4, and 8 servers (distributing numbers to each server until none are left) and compare the run times. Any quick help would be appreciated.
You can send UDP to multiple servers with the same socket. Probably the simplest way to do it is to have the client assign a session ID to each conversation, include the session ID in each datagram it sends, and have the server return that session ID in each reply datagram it sends. Don't use the IP address to distinguish which server a packet came from, because a server can have more than one IP address, making that unreliable.
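As a rough sketch of that in C: one UDP socket, one datagram per server, each tagged with a session ID that the (hypothetical) servers are expected to echo back. The addresses, port, and wire format are made up for illustration.

    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <string.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        int sock = socket(AF_INET, SOCK_DGRAM, 0);
        const char *servers[2] = { "192.0.2.10", "192.0.2.11" };   /* placeholders */

        /* Send one tagged request to each server over the same socket. */
        for (uint32_t id = 0; id < 2; id++) {
            struct sockaddr_in dst = {0};
            dst.sin_family = AF_INET;
            dst.sin_port = htons(5000);
            inet_pton(AF_INET, servers[id], &dst.sin_addr);

            unsigned char msg[64];
            uint32_t net_id = htonl(id);
            memcpy(msg, &net_id, 4);               /* 4-byte session ID */
            memcpy(msg + 4, "1 2 3 4", 7);         /* the numbers to compute */
            sendto(sock, msg, 11, 0, (struct sockaddr *)&dst, sizeof dst);
        }

        /* Match replies by the echoed session ID, not the source address. */
        for (int got = 0; got < 2; got++) {
            unsigned char reply[64];
            ssize_t n = recvfrom(sock, reply, sizeof reply, 0, NULL, NULL);
            if (n < 4)
                continue;
            uint32_t net_id;
            memcpy(&net_id, reply, 4);
            printf("reply for session %u: %zd bytes\n", ntohl(net_id), n);
        }
        return 0;
    }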
Just remember that if you use UDP, you don't get any of the things TCP adds. If you need any of them, you need to do them yourself. If you need all or most of them, TCP is a much better choice. TCP does:
Session establishment
Session teardown
Retransmissions
Transmit pacing
Backoff and retry
Out of order detection and rearrangement
Sliding windows
Acknowledgments
If you need any of these things and choose to use UDP, you need to do them yourself.