Has anyone dealt with the SMPP binary SMS protocol? I know this technology is still fairly widely used by the messaging aggregators and carriers, but it seems like the SMPP spec is not being updated, and support for SMPP libraries is slowly fading away. The "SMS Forum" (http://www.smsforum.net) was shut down in 2007.
To me, it feels like the protocol is dying in favor of web-service interfaces, but I was curious what other people think.
Since SMPP is used mainly by wireless operators, the answer to your question will depend a lot on what market/region/country you are dealing with.
I have experience with Latin American wireless companies, and I can tell you that although more and more companies are hiding their SMPP servers behind HTTP web services (which give them more flexibility), the SMPP protocol is still a requirement to connect to a lot of wireless companies, so it's definitely not dead.
And if you look inside those wireless companies, the SMPP protocol is very much alive in their internal networks and in their interconnections with other carriers.
It's true that the SMPP spec hasn't changed in a long time, but that's not actually a bad thing. The protocol has matured, and there seems to be no interest from the carriers in expanding it to include new functionality, especially because they have found the flexibility they need in custom HTTP APIs.
And regarding library implementations of SMPP, Kannel is in active development, although I wouldn't recommend its use. Unfortunately, most of the successful long-term SMPP client implementations that I have seen have been home-grown.
SMPP is a good protocol for simple message sending. I hope it doesn't die in favour of any HTTP-based protocols. I agree that HTTP protocols would provide flexibility; however, that would likely mean a fat payload based on some variant of XML or another text protocol, which would greatly hurt performance and power usage.
As long as SMPP is guided by the specs, it should be great to use.
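To make concrete why the binary framing keeps payloads so small, here is a rough Python sketch of building an SMPP bind_transmitter PDU by hand. The header layout (four big-endian 32-bit fields) and the command ID follow the SMPP v3.4 spec; the credentials are placeholders.

```python
import struct

def c_string(s: str) -> bytes:
    """SMPP C-octet string: ASCII bytes followed by a NUL terminator."""
    return s.encode("ascii") + b"\x00"

def bind_transmitter(system_id: str, password: str, sequence: int) -> bytes:
    # Mandatory body fields of bind_transmitter, in spec order.
    body = (
        c_string(system_id)          # system_id
        + c_string(password)         # password
        + c_string("")               # system_type (empty)
        + bytes([0x34])              # interface_version = 3.4
        + bytes([0x00, 0x00])        # addr_ton, addr_npi
        + c_string("")               # address_range (empty)
    )
    # PDU header: command_length, command_id, command_status, sequence_number.
    header = struct.pack(">IIII", 16 + len(body), 0x00000002, 0, sequence)
    return header + body

pdu = bind_transmitter("smppclient1", "password", 1)
print(len(pdu), pdu.hex())   # the whole bind request fits in a few dozen bytes
```

A real client would then push this over a persistent TCP connection to the SMSC (2775 is the commonly used port) and wait for the bind_transmitter_resp PDU before submitting messages.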
We are still using it, unfortunately.
We also still use it, but we are replacing it with an HTTP protocol for new projects!
Ricardo Reyes' answer (https://stackoverflow.com/a/545651/467545) covered this question almost completely. I'm just adding my own experience in this matter.
Comment on binary messages
I work for a company that runs an SMPP hub. We handle business logic for binary SMS. The percentage is low, but they exist. Smartphones (the iPhone, for example) can create binary SMS for long messages, and we are seeing some use cases.
Comment on SMPP spec
It has been quite a few years since the SMPP spec was last updated. I have not seen any major carrier in the US support the SMPP 5.0 spec; almost everywhere it's SMPP v3.4. For me, the reasons are:
SMPP v3.4 meets most requirements. Companies have found ways around its limitations.
The growth trend for SMS is flattening. It may not make sense to spend resources on this area. Even though SMPP v5.0 did not get much traction, no alternative is being developed.
Smartphone apps can use the data plan to send messages (not over SMPP) and bypass the carriers' SMS communication channel. iPhone's iMessage is the biggest trend changer here.
Despite the declining growth trend, SMS over SMPP, being a core communication protocol, will probably continue to live in the carrier space for a few more decades. That's strictly my personal observation.
Comment on usages of SMPP
SMPP requires specific knowledge about the protocol, and it takes time and patience to acquire that knowledge. It probably influenced the rise of other alternatives.
I have seen that developers are leaning more and more towards HTTP-based communication. The implementations are custom. I have seen the following (a minimal sketch of the GET style appears after the lists below):
HTTP communication using GET parameters. If a synchronous acknowledgement is required, the call becomes blocking; otherwise a callback is used to report the acknowledgement.
HTTP communication using POST parameters, with XML used to describe the SMS.
Web service
Some rarely used alternatives are:
SMTP. For sending from an entity.
IMAP. For receiving.
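As a rough illustration of the GET style from the first list above, a sender might look like this in Python; the endpoint URL, parameter names and response format are entirely hypothetical and would come from the aggregator's documentation.

```python
import requests  # third-party HTTP client

# Hypothetical aggregator endpoint and parameters; every provider defines its own.
SEND_URL = "https://api.example-aggregator.com/sms/send"

def send_sms(to: str, text: str, api_key: str) -> str:
    # Blocking call: the HTTP response itself carries the acknowledgement.
    resp = requests.get(
        SEND_URL,
        params={"apikey": api_key, "to": to, "text": text},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.text  # typically a message ID or status string

print(send_sms("15551234567", "Hello over HTTP", "my-api-key"))
```

The callback variant simply returns immediately and the aggregator later POSTs a delivery report to a URL you registered.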
Although many SMS aggregators have HTTP APIs, I think SMPP is very useful when you want to do massive sendings, because it is a connection-oriented protocol.
mux.com (and also agora.io and so on) is a great service, but very expensive since it's a server solution. I can't use that.
Discord is a great client solution that just uses gateways as a pass-through to hide IP addresses and so on. They described their entire architecture here: https://discord.com/blog/how-discord-handles-two-and-half-million-concurrent-voice-users-using-webrtc Discord isn't the only one with this approach; Instagram has, AFAIK, the same approach too, since it's cheap and does the job.
I want to use this solution for my social media app (like Instagram) too, but without the many custom-built things to increase performance. I am a one-man team and I can't handle that complexity; still, I don't want to use mux because it's way too expensive for me.
I am okay with stock/standard performance. Does anyone know of, or can anyone point me to, a tutorial on where to start building such a WebRTC Elixir gateway solution for m:m audio/video live stream calls?
Maybe there is already published code that I can just copy-paste.
thanks a lot!!
Edit: I've got an answer on their official forum: https://elixirforum.com/t/how-to-build-webrtc-m-m-audio-video-live-streams-calls-like-discord-does-client-to-client-via-gateway-for-ip-protection/44956
The Discord backend uses an SFU to forward streams between peers in a video room. The description from the Discord post:
Discord Voice server contains two components: a signaling component and a media relay component called the selective forwarding unit or SFU. The signaling component fully controls the SFU and is responsible for generating stream identifiers and encryption keys, forwarding speaking indication, etc.
Note that the projects in the answer are written in Elixir (which is based on Erlang), a language that is not commonly used in either live streaming or WebRTC. For example, FFmpeg, x264, libopus, WebRTC and SRS, all of these audio/video components, are written in C or C++, so you'd better think about it.
For a video chat product like discord:
The client app, no doubt, could be built on WebRTC, both H5 and mobile.
For the SFU server, I recommend a C++ server, for example SRS or mediasoup, because the whole audio/video ecosystem is C/C++ based and there is a lot for an SFU to handle.
The signaling server (also called the videoroom) could be written in Node.js or Go. It depends on your business logic, so I highly recommend the language you are most skilled in; there is a lot of work to do in this server.
And not all peers in a video room need to publish a video stream; some only play or consume streams, so it's actually low-latency live streaming. For more information about live streaming and video chat, please read this post.
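To make the signaling/videoroom role concrete, here is a minimal concept sketch (in Python for brevity, rather than Elixir) of a room-based relay: clients join a room and the server forwards their signaling messages (SDP offers/answers, ICE candidates) to the other members. All names are illustrative and this is not a production WebRTC stack; the media itself would still flow through an SFU such as SRS or mediasoup.

```python
import asyncio
import json

# room name -> set of writers for the clients currently joined
rooms: dict[str, set[asyncio.StreamWriter]] = {}

async def handle_client(reader: asyncio.StreamReader,
                        writer: asyncio.StreamWriter) -> None:
    joined = None
    try:
        while True:
            line = await reader.readline()
            if not line:
                break
            msg = json.loads(line)
            if msg.get("type") == "join":           # {"type": "join", "room": "r1"}
                joined = msg["room"]
                rooms.setdefault(joined, set()).add(writer)
            elif joined:                            # forward SDP/ICE blobs verbatim
                for peer in rooms[joined]:
                    if peer is not writer:
                        peer.write(line)
                        await peer.drain()
    finally:
        if joined:
            rooms[joined].discard(writer)
        writer.close()

async def main() -> None:
    server = await asyncio.start_server(handle_client, "0.0.0.0", 4000)
    async with server:
        await server.serve_forever()

if __name__ == "__main__":
    asyncio.run(main())
```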
I want to try programming something which can do things across multiple end points, so that when things occur on one computer events can occur on others. Obviously the problem here is in sending commands to the other end points.
I'm just not sure what program I would use to do this. I'm guessing it would have to use an API based on some kind of client-server model. I expect there are things people use to do this, but I don't know what they are called.
How would I go about doing this? Are there common APIs which allow people to do this?
There are (at least) two types to distinguish between: RPC APIs and Message Queues (MQ)
An RPC-style API can be imagined as a remotely callable interface; it typically gives you one response per request. Apache Thrift1) is one of the frameworks designed for this purpose: an easy-to-use cross-platform, cross-language RPC framework. (And oh yes, it also supports Erlang, just in case ...). There are a few others around, like Google's Protocol Buffers, Apache Avro, and a few more.
Message queuing systems are more suitable in cases where looser coupling is desired or acceptable. In contrast to an RPC-style framework and API, a message queue decouples request and response a bit more. For example, an MQ system is more suitable for distributing work to multiple handlers, or distributing one event to multiple recipients via producer/consumer or publish/subscribe patterns. Typical candidates are MSMQ, Apache ActiveMQ or RabbitMQ.
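A minimal sketch of the work-distribution pattern described above, assuming a local RabbitMQ broker and the pika client library (the queue name and payload are arbitrary):

```python
import pika  # RabbitMQ client library

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.queue_declare(queue="work")

# Producer: fire-and-forget; no response is expected on this call.
channel.basic_publish(exchange="", routing_key="work", body=b"resize image 42")

# Consumer: any number of workers can pull from the same queue.
def on_message(ch, method, properties, body):
    print("handling:", body)
    ch.basic_ack(delivery_tag=method.delivery_tag)

channel.basic_consume(queue="work", on_message_callback=on_message)
channel.start_consuming()   # blocks; run producers and consumers in separate processes
```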
Although this can be achieved with RPC as well, it is much more complicated and involves more work, since you are operating on a somewhat lower abstraction level. RPCs shine when you need the request/response style and value performance over the comfort of an MQ.
On top of MQ systems there are more sophisticated service bus systems, for example NServiceBus. A service bus operates on an even higher level of abstraction. They also have their pros and cons, but can be helpful too. In the end, it depends on your use case.
1) Disclaimer: I am actively involved in that project.
Without more information, I can only suggest you look at Erlang. It is probably the easiest language for learning distributed systems, since sending messages is built into the language and it makes no difference to the language or the command itself whether the message is sent within the same PC, across a LAN, or over the Internet to a different machine.
I have an idea for a mobile service based project. I have read some stuff online, including the following tutorial: SMS Tutorial and find it to be pretty helpful but I have some basic questions so please bear with me.
I run a small (as in me and a friend) company and want to setup a situation where people can text a number and receive information back, or setup on my website that they receive text messages letting them know its time to do something, or "tech support" can text them if they wish, etc.
So from what I've gathered, I can use Kannel as my "SMS gateway", interacting with a GSM modem that I can purchase. For this modem I can buy a texting-plan SIM. I can then set up Kannel to use my GSM modem as a virtual SMSC. So, users can text that SIM's phone number, which will go to the modem and be interpreted by Kannel. My application will only have to interact with Kannel. And in the future, if I decide I need more texting throughput and upgrade to a real SMSC, my application does not need to change.
Is there anything I'm missing/misunderstanding?
Thanks!
Using Kannel as an SMS gateway is a good option for a small company. It does come with a lot of headaches, as you have to build, configure, maintain, etc. all of the services you need. This is what everyone is referring to as "a lot of work".
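For context, once Kannel's bearerbox/smsbox are configured for your GSM modem, the application side usually just calls the sendsms HTTP interface. A minimal Python sketch, assuming Kannel's default sendsms port (13013) and a sendsms-user that you have defined in kannel.conf (the username and password below are placeholders):

```python
import urllib.parse
import urllib.request

# Credentials come from the "group = sendsms-user" section of kannel.conf.
KANNEL_SENDSMS = "http://localhost:13013/cgi-bin/sendsms"

def send_via_kannel(to: str, text: str) -> str:
    params = urllib.parse.urlencode({
        "username": "myapp",      # sendsms-user name (assumption)
        "password": "secret",     # sendsms-user password (assumption)
        "to": to,
        "text": text,
    })
    with urllib.request.urlopen(f"{KANNEL_SENDSMS}?{params}", timeout=10) as resp:
        return resp.read().decode()  # e.g. "0: Accepted for delivery"

print(send_via_kannel("+15551234567", "Your appointment is tomorrow at 10am"))
```

Incoming messages go the other way: an sms-service group in kannel.conf maps a keyword to a get-url on your application, so Kannel calls you back over HTTP when an SMS arrives.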
What you are looking to do is use the GSM modem as a long code (versus a short code) for text messaging.
I think this is an acceptable solution for something small where service, latency and availability might not be as important, if it's for a local region. But if this is something that needs to be reliable, I would think about getting a short code (or sharing a short code), or just an SMS messaging service with no long/short code at all (see Twilio below).
Also, if you're trying to roll out your own service, there are some things to consider with the SMSCs. If your Kannel/GSM modem doesn't support a carrier, you would have to reach out to that carrier and connect to their SMSC, and connecting to a carrier comes at a hefty price. This is why aggregators are appealing: they have all the carrier connections and pay those fees.
If you transition from Kannel to a gateway service provider, that's another headache, as you would need to start from scratch, use whatever the service provider's API is, and replace Kannel/GSM altogether. Your workflow might be the same, but how you send and receive messages will differ greatly. Most (if not all) aggregators offer their own SDK/API/service that you would need to comply with to use their service.
If it's in the US there are some other options you might consider:
Twilio: this is the simplest solution I've seen for smaller companies looking for SMS functionality. They are currently offering SMS short codes on a trial basis, but if you need a short code I would go with a true messaging aggregator.
Zeep Mobile offers a free SMS service with a short code, but they do send ads with all your SMS messages. This is a great way to subsidize costs if the ads don't bother you. I'm not sure if you can pick the types of ads you want, but it's another option for a service.
Clickatell offers a service where you can share a short code and use keywords to filter the SMS traffic to your account. This is another way to cut costs if your funds and traffic (how many SMS you send and receive) are limited.
OpenMarket offers a full-service SMS/MMS global platform; this is who you want if you're doing mass amounts of traffic and/or need global reach.
Note: These are just a few services as there are many, many more
There are also some caveats with having a short code, as you will need to register a new short code for each country you service that requires its own. For example, your US short code will not cover Canada; you would need a Canadian short code as well. This can get costly if you're only doing small amounts of traffic.
I think you have got the basic considerations right. John is right: using an SMS gateway is a better idea; you will get better reliability and throughput, and you could get lower prices.
I have a need of implementing two apps that will exchange data with each other. Both apps will be running on separate PCs which are part of a LAN.
How can we do this in Delphi?
Is there any free component which will make it easy to exchange data between apps across PCs?
If I'm writing it myself, I (almost) always use sockets to exchange data between apps.
It's lightweight, it works well on the same machine, across the local network, or over the Internet with no changes, and it lets you communicate between apps with different permissions, like services (Windows messages cause problems here).
It might not be a requirement for you, but I'm also a fan of platform-independent transports, like TCP/IP.
There are lots of free choices for Delphi. Here are a few that I know of. If you like blocking libraries, look at Indy or Synapse. If you prefer non-blocking, check out ICS.
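The pattern is the same whichever library you pick. As a language-agnostic illustration (shown here in Python rather than Delphi; with Indy the calls map onto TIdTCPServer/TIdTCPClient), one app listens and the other connects and sends newline-delimited messages. The port and message format are arbitrary.

```python
import socket
import threading
import time

HOST, PORT = "0.0.0.0", 5151            # any free port reachable on the LAN

# --- receiving side (this would live in app A) ---
listener = socket.create_server((HOST, PORT))   # bind + listen

def serve() -> None:
    conn, addr = listener.accept()
    with conn, conn.makefile("r") as lines:      # newline-delimited text messages
        for line in lines:
            print(f"received from {addr}: {line.strip()}")

threading.Thread(target=serve, daemon=True).start()

# --- sending side (this would live in app B, pointed at app A's IP) ---
with socket.create_connection(("127.0.0.1", PORT)) as sock:
    sock.sendall(b"status=ready\n")
    sock.sendall(b"value=42\n")

time.sleep(0.5)   # give the listener a moment to print in this single-process demo
```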
Before you choose a technique, you should characterize the communication according to its throughput, granularity, latency, and criticality.
Throughput -- how much data per unit time will you need to move? The range of possible values is so wide that the lowest-rate and highest-rate applications have almost nothing in common.
Granularity -- how big are the messages? How much data does the receiving application need before it can use the message?
Latency -- when one application sends a message, how soon must the other application see it? How quickly do you want the receiving application to react to the sending application?
Criticality -- how long can a received message be left unattended before it is overrun by a later message? (This is usually not important unless the throughput is high and the message storage is limited.)
Once you have these questions answered, you can begin to ask about the best technology for your particular situation.
-Al.
I used to use Mailslots if I needed to communicate with more than one PC at a time ("broadcast") over a network, although there is the caveat that mailslots are not guaranteed.
For 1-to-1, named pipes are the Windows way of doing this sort of thing: you basically open a communication channel between two PCs and then write messages into the pipe. Not straightforward to start with, but very reliable and the recommended approach for things like Windows services.
MS offer Named Pipes as an alternative way of communicating with an SQL Server (other than TCP/IP).
But as Bruce said, TCP/IP is standard and platform independent, and very reliable.
DCOM used to be a good method of interprocess communication, and it was also one of Delphi's strong points. Today I would strongly advise against using it.
Depending on the nature of your project I'd choose either
using a SQL server
socket communication
Look at solutions that use a "Remote Procedure Call" style interface. I use RemObjects SDK for this sort of thing, but there are open source options such as RealThinClient which would do just as well.
Both of these allow you to create a connection that is "transparent" to most of your code: you just call an interface which sends the data over the wire and gets results back. You can then program the way you usually do, and forget the details of sockets etc.
This is one of those cases where there really isn't a "best" answer, as just about any of the technologies already discussed can be used to communicate between two applications. The choice of method will really come down to the critical nature of your communication, as well as how much data must be transferred from one workstation to another.
If your communication is not time-sensitive or critical, then a simple poll of a database or file at regular intervals might be sufficient. If your communication is critical and time-sensitive, then placing a TCP/IP server in each client might be worth pursuing. If it is just time-sensitive, mailslots make a good choice; if critical but not time-sensitive, then named pipes.
I've used the Indy library's Multicast components (IdIPMCastClient/Server) for this type of thing many times. The apps just send XML to each other. Quick and easy with minimal connection requirements.
Probably the easiest way is to read and write a file (or possibly one file per direction). It also has the advantage that it is easy to simulate and trace. It's not the fastest option, though (and it definitely sounds lame ;-) ).
A possibility could be to "share" objects across the network.
It is possible with a Client-Server ORM like our little mORMot.
These Open Source libraries work from Delphi 6 up to XE2 and use JSON for transmission. Some security features are included (involving a RESTful authentication mechanism), and it can use any database, or no database at all.
See in particular the first four samples provided, and the associated documentation.
For Delphi application integration, message-oriented middleware might be an option. Message brokers offer guaranteed delivery, load balancing and different communication models, and they work cross-platform and cross-language. Open source message brokers include:
Apache ActiveMQ and ActiveMQ Apollo
Open Message Queue (OpenMQ)
HornetQ
RabbitMQ
(Disclaimer - I am the author of Delphi / Free Pascal client libraries for these servers)
This might be more of a serverfault.com question but a) it doesn't exist yet and b) I need more rep for when it does :~)
My employer has a few hundred servers (all *NIX) spread across several locations. As I suspect is common we don't really know how many servers we have: more than once I've been surprised to find a server that's been up for 5 years, apparently doing nothing but elevating the earth's temperature slightly. We have a number of databases that store bits of server information -- Puppet, Cobbler, Nagios, Cacti, our load balancers, DNS, various internal spreadsheets and so on but it's all very disparate, incomplete and overlapping. Maintaining this mess costs time and money.
So, I'd like to come up with a single database which holds details of what each server is (hardware specs, role, etc.) and replaces (or at least supplies data for) the databases mentioned above. The database and web interface are likely to be a Rails app, as this is what I have most experience with; I'm more of a sysadmin than a coder.
Has this problem already been solved? I can't find any open source software that really fits the bill and I'm generally not too keen on bloaty, GUI vendor-supplied solutions.
How should I implement the device information collection bit? For instance, it'd be great to have the database update device records when disks are added or removed, or when the server serial number changes because HP replaces the board. This information comes from many different sources: dmidecode, command-line disk tools, SNMP against the server or its onboard lights-out card, and so on. I could expose all this through custom scripts and net-snmp, or I could run a local poller that reports the information back to the central DB (maybe via a RESTful interface or something). It must be easily extensible.
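To make the "local poller reporting back over REST" idea concrete, here is a minimal sketch in Python. The inventory URL and JSON shape are invented for illustration, dmidecode needs root, and a real collector would gather far more than the few DMI fields shown.

```python
import json
import socket
import subprocess
import urllib.request

INVENTORY_URL = "https://inventory.example.internal/api/servers"  # hypothetical endpoint

def dmi(keyword: str) -> str:
    """Read one value via dmidecode --string (requires root)."""
    return subprocess.run(
        ["dmidecode", "-s", keyword],
        capture_output=True, text=True, check=True,
    ).stdout.strip()

record = {
    "hostname": socket.getfqdn(),
    "serial_number": dmi("system-serial-number"),
    "manufacturer": dmi("system-manufacturer"),
    "product": dmi("system-product-name"),
}

req = urllib.request.Request(
    INVENTORY_URL,
    data=json.dumps(record).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)
with urllib.request.urlopen(req, timeout=10) as resp:
    print("inventory server answered:", resp.status)
```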
Have you done this? How? Tell me your experiences, discoveries, mistakes and recommendations!
This sounds like a great LDAP problem looking for a solution. LDAP is designed for this kind of thing: a catalog of items that is optimized for data searches and retrieval (but not necessarily writes). There are many LDAP servers to choose from (OpenLDAP, Sun's OpenDS, Microsoft Active Directory, just to name a few ...), and I've seen LDAP used to catalog servers before. LDAP is very standardized and a "database" of information that is usually searched or read, but not frequently updated, is the strong-suit of LDAP.
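As a rough sketch of what querying such a catalog could look like, assuming an OpenLDAP server that stores one entry per machine and the ldap3 Python library (the DN layout and attribute choices here are invented):

```python
from ldap3 import Server, Connection, ALL  # third-party LDAP client

server = Server("ldap://ldap.example.internal", get_info=ALL)
conn = Connection(server, user="cn=reader,dc=example,dc=internal",
                  password="secret", auto_bind=True)

# Find every server entry in a particular datacenter (schema is illustrative).
conn.search(
    search_base="ou=servers,dc=example,dc=internal",
    search_filter="(&(objectClass=device)(l=datacenter-1))",
    attributes=["cn", "serialNumber", "description"],
)
for entry in conn.entries:
    print(entry.cn, entry.serialNumber)
```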
My team has been dumping all our systems into RDF for a month or two now. We have the systems implementation people create the initial data in Excel, which is then transformed to N3 (RDF) using Perl.
We view the data in Gruff (http://www.franz.com/downloads.lhtml) and keep the resulting RDF in Allegro (a triple store from the same guys that do Gruff)
It's incredibly simple and flexible: no schema means we simply augment the data on the fly, and with a wide variety of RDF viewers and reasoning engines the presentation options are endless.
The best part for me? No coding; just create triples, throw them into the store, and then view them as graphs.
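As an illustration of how little code the triple approach needs, a sketch using Python's rdflib instead of the Perl pipeline described above (the namespace and property names are made up):

```python
from rdflib import Graph, Literal, Namespace, RDF

INV = Namespace("http://example.org/inventory#")   # hypothetical vocabulary

g = Graph()
srv = INV["web01"]
g.add((srv, RDF.type, INV.Server))
g.add((srv, INV.serialNumber, Literal("CZJ1234XYZ")))
g.add((srv, INV.locatedIn, INV["datacenter-1"]))

print(g.serialize(format="n3"))   # the same N3 text you would load into a triple store
```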
The collection of detailed machine information is a very frustrating problem (many vendors want to keep it this way). Even if you can spend a large amount of money, you probably will not find a simple solution to this problem. IBM and HP offer products that achieve what you are seeking, but they are very, very expensive, and will leave a bad taste in your mouth once you realize that probably all you needed was 40-50% of the functionality they offer.

You say that you need to monitor *NIX servers. Most (if not all) unices support RFC 1514 (Windows also supports this RFC as of Windows 2000). The Host MIB support defined by RFC 1514 has its drawbacks, however. Since it is SNMP based, it requires that SNMP be enabled on the machine, which is typically not the default for Unix and Windows machines. The reason for this is that SNMP was created before the entire world was using the Internet, and thus the old, crusty nature of its security is of concern. In many environments this may not be acceptable for security reasons. However, if you are only dealing with machines behind the firewall, this might not be an issue (I suspect this is true in your case).

Several years ago, I was working on a product that monitored hundreds of Unix and Windows machines. At the time, I did extensive research into the mechanics of how to acquire detailed information from each machine, such as disk info, running processes, installed software, up-time, memory pressure, CPU and IO load (including network), without running a custom client on each machine. This info can be collected in a centralized fashion. As of three or four years ago, the RFC 1514 Host MIB spec was the only "standard" for acquiring detailed real-time machine info without resorting to OS-specific software. Sun and Microsoft announced a WebService-based initiative many years ago to address some of this, but I suspect it never received any traction, since I cannot at the moment even remember its marketing name.
I should mention that RFC 1514 is certainly no panacea. You are at the mercy of the OS-provided SNMP service unless you have the luxury of deploying a custom info-collecting client to each machine. The RFC 1514 spec dictates that several parameters are optional, and if your target OS does not implement them, then you are back to custom code to provide the information.
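As a sketch of what a Host MIB query looks like from the collecting side, here is a single GET against HOST-RESOURCES-MIB using the pysnmp library; the hostname and community string are placeholders, the target machine must already have an SNMP agent enabled, and the MIB must be resolvable by pysnmp.

```python
from pysnmp.hlapi import (
    SnmpEngine, CommunityData, UdpTransportTarget, ContextData,
    ObjectType, ObjectIdentity, getCmd,
)

# One synchronous GET of the system uptime object from the Host Resources MIB.
error_indication, error_status, error_index, var_binds = next(
    getCmd(
        SnmpEngine(),
        CommunityData("public"),                       # v2c community (placeholder)
        UdpTransportTarget(("server01.example.internal", 161)),
        ContextData(),
        ObjectType(ObjectIdentity("HOST-RESOURCES-MIB", "hrSystemUptime", 0)),
    )
)

if error_indication:
    print("SNMP error:", error_indication)
else:
    for var_bind in var_binds:
        print(" = ".join(x.prettyPrint() for x in var_bind))
```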
I'm contemplating how to go about this myself, and I think this is one of the key pieces of infrastructure that not having around keeps us in the dark ages. Hopefully this will be a popular question on serverfault.com. :)
It's not as if you could just install a single tool to collect this data (that's not possible cheaply); ideally you want everything from the hardware up to the applications on the network feeding into this thing.
I think the only approach that makes sense is a modular one. The range of devices and types of information is too disparate to come under a single tool. Also the collection of data needs to be as passive and asynchronous as possible - the reality of running infrastructure means that there will be interruptions and you can't rely on being able to get the data at all times.
I think the tools you've pointed out form something of an ecosystem that could work together - Cobbler can install from bare-metal and hand over to Puppet, which has support for generating Nagios configs, and storing configs in a database; for me only Cacti is a bit opaque in terms of programmatically inserting new devices, templates etc. but I know this is possible.
Ultimately you have to sit down and work out which pieces of information are important for the business you work for, and design a db schema around that. Then, work out how to get the information you need into the db, whether it's from Facter, Nagios, Cacti, or direct snmp calls.
Since you asked about collection of data, I think if you have quite disparate kit (Dell, HP etc.) then it makes sense to create a library to abstract away as much as possible the differences between them, so your scripts just make standard calls such as "checkdiskhealth". When you add new hardware you can add to the library rather than having to write a completely new script.
Sounds like a common problem that larger organizations would have. I know our (50-person company) sysadmin has a little Access database of information about every server, license, and piece of hardware installed. He's very meticulous, but when it comes time to replace or repair hardware, he knows everything about it from his little db.
You and your organization could sponsor an open source project to get you what you need, and give back to the community so that additional features (that you may not need now) can be developed at no cost to you.
Maybe a simple web service? Just something that accepts a machine name or IP address. When the service gets input, it sticks it in a queue and kicks off a task to collect the data from the machine that notified it. The nature of the task (SNMP interrogation, remote call to a Perl script, whatever) could be stored as part of the machine information in the database. If the task fails, the machine ID stays in the queue and the machine is periodically re-polled until the information is collected. Of course, you also have to have some kind of monitor running on your servers to notice that something has changed and send the notification; hopefully this is easily accomplished with whatever server monitoring software you've already got in place.
There are some solutions from the big vendors for managing monstrous sets of machines - such as some of the Tivoli stuff from IBM. That is probably, however, overkill for mere hundreds of machines.
There are some free software server database solutions but I do not know if they provide hooks to update information automatically from the machines with dmidecode or SNMP. One I heard about (but no personal experience, sorry), is GLPI.
I believe you are looking for Zabbix. It's open source, easy to install and use.
I installed it for a client a few years ago, and if I remember right, it has a client application that connects to the Zabbix server to update it with the requested information.
I really recommend it: http://www.zabbix.com
Check out Machdb. It's an open source solution to the problem you are describing.