PCI-DSS Section 10

There are plenty of log reporting tools, but I am having trouble choosing one.
Can anyone advise me on a tool for audit-log monitoring?

This is one of those questions where there is no right answer on which tool is best - you should use whatever tool works for your organisation.
There are plenty of both commercial and open source tools available which can be used to meet the PCI logging requirements. Having said that, the open source tools that I've looked at recently don't cover all PCI requirements in their basic versions, so make sure you find out about that. The same goes for commercial tools: some will treat meeting PCI as a feature that is only available in their higher-tier offerings.
Key requirements to look at are:
options for gathering logs (push/pull/agent etc)
storage options - you need at least 3 months available online and a year available in total
review capabilities - looking at logs every day isn't practical so what alerts can you set up for this to be automated?
protection of logs - some sort of integrity monitoring is required (req 10.5.5) to ensure logs aren't altered unknowingly
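To make the integrity-monitoring point concrete, here's a minimal sketch of the hash-manifest idea (my own illustration, not something req 10.5.5 prescribes); the file paths are hypothetical:

```python
import hashlib
import json
from pathlib import Path

LOG_DIR = Path("/var/log/app")                 # hypothetical log location
MANIFEST = Path("/var/lib/log-manifest.json")  # baseline hashes live here

def sha256_of(path: Path) -> str:
    """Hash a file in chunks so large logs don't exhaust memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def build_manifest() -> None:
    """Record a baseline hash for every rotated (closed) log file."""
    manifest = {str(p): sha256_of(p) for p in sorted(LOG_DIR.glob("*.log.*"))}
    MANIFEST.write_text(json.dumps(manifest, indent=2))

def verify_manifest() -> list:
    """Return the files whose contents changed since the baseline."""
    manifest = json.loads(MANIFEST.read_text())
    return [path for path, digest in manifest.items()
            if sha256_of(Path(path)) != digest]

if __name__ == "__main__":
    if not MANIFEST.exists():
        build_manifest()
    else:
        tampered = verify_manifest()
        if tampered:
            print("ALERT: log files modified:", tampered)  # wire into alerting
```

In practice the manifest itself has to live somewhere the monitored host can't overwrite, otherwise an attacker who edits a log can simply rebuild the manifest.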
Your QSA may well give you extra bonus marks if your logging tool automatically feeds into your incident response process - worth it if it increases their confidence in your processes!


JitterBit vs Dell Boomi vs Celigo

We've narrowed our selection for an iPaaS down to the above three.
Initially we're looking to pass data from a cloud based HR system to Netsuite, and from Netsuite to Salesforce, and sometimes JIRA.
I've come from a MuleSoft background, which I think would be too complex for this. On the other hand, it seems that Celigo is VERY drag-and-drop, and there's not much room for modification/customisation.
Of the three, do you have any experience/recommendations? We aren't looking for any code heavy custom APIs, most will just be simple scheduled data transfers but there may be some complexity within the field mapping, and we want to set ourselves up for the future.
I spent a few years removing Celigo from NetSuite and Salesforce. The best way I can describe Celigo is that it is like the old school anti-virus programs which were often worse than the viruses... lol... It digs itself into the end system, making it a nightmare to remove.
Boomi does the job, but is very counter-intuitive, and overly complex. You can't do everything from one screen, you can't easily bounce back and forth between tasks/operations/etc. And, sometimes it is very difficult to find where endpoints are used, as they are not always shown in their "where is this used" feature. Boomi has a ton of endpoint connectors pre-built (the most, I believe), but I have not seen an easy way to just create your own. Boomi also has much more functionality than just the integrations, if that is something that may be needed.
Jitterbit, my favorite, is ridiculously simple to use. You can access everything from one main screen, and you can connect to anything (as long as it can reach out to the network, or you can reach it via the network - internal or external). Jitterbit has a lot of pre-built endpoint connectors, and it is also extremely easy to just create a connection to anything you want. The win with Jitterbit is that it is super easy to use, super easy to learn, it always works, and they have amazing support (if you need it). I have worked with Jitterbit the most (about 6 years), and I have never been unable to complete an integration task in less than a couple of days, max.
I have extensive experience with the Dell Boomi platform but none with JitterBit or Celigo. Dell Boomi offers a very versatile and well-supported iPaaS solution. The technical challenges of Boomi are some UI/usability issues (#W3BGUY mentioned the main ones) and the lack of out-of-the-box support for CI/CD and DevOps processes (code management, versioning, deployments, etc.).
One more important component to consider here is the pricing of the platform. Boomi charges its clients yearly connection prices, where a connection is defined as a unique combination of URL, username, and password. The yearly license costs vary and can range anywhere between $1,000 and $12,000 per license per year. The price depends greatly on your integration landscape and the discounts provided, so I would advise engaging with the vendor early to understand your costs. Would be great to hear from others on pricing for JitterBit and Celigo.
Boomi is also more than just an iPaaS platform. They offer other modules of their platform to customers: API Management, Boomi Flow (workflow and automation module), Master Data Hub (master data management). Some of these modules are well developed and some are in their infancy (API Management).
From my limited experience with MuleSoft platform, I share the OP's sentiments about it being too complex for simple integrations. They do provide great CI/CD and DevOps functionality though if that is something that is needed.
There is not a simple answer to a question like this. One needs to look at multiple aspects of the platform and make a decision based on a multitude of factors. I would advise looking at the Gartner and Forrester reports for general guidelines, and working out the pricing (initial and recurring) with the vendor.
I have only used Jitterbit, so can only comment on that. It works fine. It is pretty intuitive and easy to use, and it has some flexibility for writing your own queries, defining and mapping file formats, and choosing different transfer protocols.
I've only used the free version (which you need to host somewhere and also is not supported) and it was good enough for production tasks. If you have the luxury of time, I'd say download it and try it out. If it works for you, throw it on a server or upgrade to the cloud version.
One note: Jitterbit uses background services. If you run it locally and then decide to migrate your account to a server, you need to stop those services on your local. Otherwise, it will try to run jobs from both locations and that doesn't turn out well.
Consider checking out Choreo as well. It has a novel simultaneous code + low-code approach for integration development, and it provides rich AI support for performance monitoring, debugging, and data mapping.
Disclaimer: I'm a member of the project.

NServiceBus & ServiceInsight SQL Server Transport & Persistence

The application we have been building is starting to solidify in that the majority of the functionality is now in place. This has given us some breathing room, and we are starting to evaluate our persistence model and the management of it. I guess you could say the big elephant in the room is RavenDB. While we have not functionally experienced any issues with it yet, we are not comfortable with managing it. Simple tasks such as executing a query, truncating a collection, etc., are challenging for us as we are new to the platform and to document-based NoSQL solutions in general. Of course we are capable of learning it, but I think it comes down to confidence, time, and leveraging our existing SQL Server skill sets.
For example, we pumped millions of events through the system over the course of a few weeks, and the successfully processed messages were routed to our Audit queue in MSMQ. We also had ServiceInsight installed, and it processed the messages in the Audit queue, which chewed up all the disk space on the server. We did not know how to fix this and literally had to delete the data file that we found for RavenDB. Let me just say, doing that caused all kinds of headaches.
So with that in mind, I have been charged with evaluating the feasibility and benefits of potentially leveraging SQL Server for the transport and/or persistence for our service endpoints. In addition, I could use some guidance for configuring ServiceControl and ServiceInsight to leverage SQL Server. Any information you might be able to provide regarding configuring these, and identifying any drawbacks or architectural issues that we should consider, would be greatly appreciated.
Thank you, Jeffrey
Using SQL persistence requires very little configuration (it's an implementation detail). However, using the SQL transport is more of an architectural decision than an infrastructure one, as you are changing to a broker-style architecture; that has implications you need to consider before going down that route.
ServiceControl and ServiceInsight persistence:
Although ServiceControl monitors MSMQ as the default transport, you can use ServiceControl with other transports such as RabbitMQ and SQL Server as well. Here you can find the details of how to do that.
At the moment ServiceControl relies on RavenDB for its persistence, and it is not possible to change that to SQL, as ServiceControl relies on Raven features (AFAIK).
There is an open issue for expiring ServiceControl's data; see this issue on GitHub.
HTH
Regarding ServiceControl usage of RavenDB (this is the underlying service that serves the data to ServiceInsight UI):
As Sean Farmar mentioned (above), in the post-beta releases we will be including message expiration, and on-demand audited message deletion commands so that you can have full control of the capacity utilization of SC.
You can also change the drive/path of the ServiceControl database location to allow it to use a larger drive.
Note that ServiceControl (and ServiceInsight / ServicePulse, which use it) is intended for analysis, debugging, and operational monitoring. It's intended to store a limited amount of audited data (based on your throughput and capacity needs, this may vary significantly when counted as a number of messages, but the database storage capacity can be up to 16TB).
If you need long-term storage for audited data, you can hook into the ServiceControl HTTP API and transfer the message data into various long-term / unlimited-size / low-cost storage solutions (e.g. http://aws.amazon.com/glacier).
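As a rough sketch of what that hook might look like - the endpoint path and response shape here are assumptions, so check the ServiceControl HTTP API documentation for your version, and the S3 bucket is hypothetical:

```python
import json
import boto3      # AWS SDK; Glacier-class storage via S3
import requests

SERVICECONTROL = "http://localhost:33333/api"   # adjust host/port as needed
BUCKET = "my-audit-archive"                     # hypothetical bucket name

s3 = boto3.client("s3")

def archive_audit_messages(page: int = 1) -> None:
    # Page through ServiceControl's audited messages (the endpoint shape may
    # differ between versions; verify against your ServiceControl docs).
    resp = requests.get(f"{SERVICECONTROL}/messages", params={"page": page})
    resp.raise_for_status()
    for msg in resp.json():
        s3.put_object(
            Bucket=BUCKET,
            Key=f"audit/{msg['message_id']}.json",
            Body=json.dumps(msg),
            StorageClass="GLACIER",   # low-cost, long-term storage tier
        )
```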
Please let us know if this answers your needs and whether you have additional questions
Danny.

Named user plus, what is this? [closed]

I was looking at Oracle licensing; it looks cheap for Named User Plus. I mean, if I develop a web application in which the user has no interaction with the database other than registering and logging in, and if I make a virtual user inside the server to do all these things (get the user name and password from users etc., keep them in a queue, and execute database commands one by one), will I need more than one Named User Plus license for this? I am a total noob in the Oracle and web fields; I'm just a designer who is learning server-side technologies, so if this question is invalid please let me know why.
Named user licensing is not the best option in this situation - Oracle considers the web application a multiplexing device and will require you to track the users of the application and purchase a named user license for each of them.
[Edit]
I see that you've received some good additional licensing information in the other answers, but in short an Oracle schema != an application user. Years ago I was unlucky enough to be the POC for an unwelcome audit by Oracle and for our intranet application I was required to report distinct IP addresses connecting to the application from the web server.
Oracle licensing is a labyrinth which few people understand. Even most Oracle employees won't discuss it because it's so complicated. In fact there are almost as many consultants making a living from offering licensing advice as there are from tuning the actual databases.
So the following is just an opinion, and you definitely should not use it as the basis of a business plan.
If your web application is for an intranet you could purchase a Named User Plus license, because you should be able to identify each and every user of your application. But if your application is going on the Internet with an unknown and unknowable userbase you will need to buy Per Processor licenses.
Oracle has a complicated mechanism for licensing multi-core processors. It very much depends on which platform and type of chip we're using. It is an area of licensing which Oracle revises on a regular basis as they try to come to terms with multi-core CPUs. It used to be that pretty much everything was 0.75; as Zendar points out, it is now the case that many configurations are licensed at 0.5 per core. Oracle always rounds up, so if we have a single dual-core CPU which attracts a 0.75 per-core multiplier it will still cost us two Per Processor licenses, but a quad-core will only cost three. Find out more.
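The rounding arithmetic is easy to sketch; the core factors below are purely illustrative, so always check the current Processor Core Factor Table:

```python
import math

def processor_licenses(cores: int, core_factor: float) -> int:
    """Oracle rounds the (cores x factor) product up to a whole license."""
    return math.ceil(cores * core_factor)

# The examples from the paragraph above, at a 0.75 core factor:
print(processor_licenses(2, 0.75))   # dual-core -> 2 licenses (1.5 rounded up)
print(processor_licenses(4, 0.75))   # quad-core -> 3 licenses
print(processor_licenses(4, 0.5))    # quad-core at a 0.5 factor -> 2 licenses
```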
One thing to bear in mind is that if your application has quite lightweight DB requirements - that is, less than 4GB of application data, suitable to run on a single CPU (single core) - you can use the Express Edition for free, for any purpose.
One more thing: licenses apply to all databases, not just those in production. So you need to factor in the cost of licensing your development and test environments as well.
With regards to that last point Zendar cites the OTN Download license. That outlines what we can do with products we have downloaded from OTN. The problem with the OTN Download license is made clear in Oracle's explanation of Database Licensing:
"This limited license gives the user
the right to develop, but not to
deploy, applications using the
licensed products. It also limits the
use of the downloaded product to one
person, and limits installation of the
product to one server."
So: if we're a one-man operation (no dog) we can develop an application using the OTN Download license. But if we want a team of developers sharing a database we need a Full Use license. And once we're supporting an application in production we need a Full Use license for the maintenance (formerly development) environment.
The other consideration is this: if we want support and patches for our development environment then we need a proper license.
I said it was a labyrinth.
Oracle has for a long time had its price list published on its website, so there is no secret there.
There you will find their definition of "Named user plus".
Short interpretation: a named user plus is every individual and/or device that accesses the database.
You can buy a per-processor license or a per-named-user license; pick the one that suits you better (be careful with the processor license - Oracle has a formula for counting processor cores - check the price list and the Oracle Processor Core Factor Table).
Regarding APC's answer - all Intel and AMD chips have a core factor of 0.5, meaning one processor license per two cores.
Development license for Oracle RDBMS products states:
We grant you a nonexclusive, nontransferable limited license to use the programs only for the purpose of developing, testing, prototyping and demonstrating your application, and not for any other purpose.
So, you can download Oracle product and use it for developing, testing, prototyping and demonstrating your application. Well, not really. See below edit.
Disclaimer: I am not and have never been an Oracle employee or an Oracle reseller. The information here is my interpretation of documents freely available on the Oracle website. I have worked with Oracle products; they are far from perfect, but I don't like misinformation, especially when correct information is available.
Edit:
RE APC's comment:
Yep. You are right. It's restrictive as you wrote in your answer.
I reread the license agreement. A few sentences after the one I quoted above, it says:
The programs may be installed on one computer only, and used by one person in the operating environment identified by us.
So, the OTN development licence is practically useless for the majority of developers.

ASP.NET - What is the best way to block the application usage?

Our clients must pay a monthly fee... if they don't, what is the best way to block usage of the ASP.NET software?
Note: The application runs on the client's own server; it's not a SaaS app...
My ideas are:
Idea: Host a Web Service on the internet that the application will use to know if the client can use the software.
Issue 1 - What happens if the client's internet fails? Or the data center fails?
Possible Answer: Make each web service access return a key that is valid for 7 or 15 days, so each web service call will enable the software to run for another 7 or 15 days; this way the application will only be locked after 7 or 15 days without consulting our web service.
Issue 2 - What if the client doesn't have, or doesn't want to enable, internet access for the application?
Idea 2: Send a key monthly to the client.
Issue - How to make an offline key?
Possible Answer: Generate a hash using the "limit" date, so each login attempt in the software compares today's hash with the key?
Issue 2 - Where to store the key?
Possible Answer: Database (not good, too easy to change), text file, registry, code file, assembly...
Any opinion will be very appreciated!
Ah, the age old issue of DRM. And that's what you're talking about here. Frankly, the fundamental answer to your question is: you can't. No matter what you do to the system, it can be hacked and modded in such a way that your DRM authentication scheme can be bypassed and/or broken.
This is a fundamental fact of software development: it can and will be pirated.
So, the answer to your question is that you will have to trust the client to pay you the fees you determine to be correct (which is the whole point of contracts in this situation).
Any other actions you take are a hardship and annoyance for your paying customers, and have the potential to erode your customer base.
Now, if you want control of your software in the nature described, then do not provide it to users to run on their own servers. Force them to be SaaS. In that way, you control all of that. But this is the only way.
Something you don't appear to be thinking about: I have seen networks which do not allow any type of "dial home" solution, as a majority of the systems were internally focused and thus these internal servers were NOT allowed to contact the outer internet. At all. It was deemed a security risk to even allow them access. How would you handle those networks?
Frankly, if I was the customer, and I paid my fees to license your software (which I installed on my own device) I would be irate if I had to allow that device access to the internet in order for it to work. Doubly so, if the software in question was any type of financial management, customer management, HR management, quality management, inventory management, sales, or just anything related to my business, customers or employees. I don't trust software developers enough to have their software talk to something else when my business-relevant data is held in their software.
In the end, what you are describing is an antagonistic approach to take with your paying customers. If you don't believe me, look at the comments that UbiSoft is getting for their latest customer-hating DRM scheme.
IMO, you have two good paths here:
Go SaaS
Ensure your contract has a bite for non-payment
Usually you provide a scrambled key that includes a valid authorization token and the expiration date through which service is paid. Then the installer will use this to "activate" your software. Not sure how this would be viewed if you have 1-2 week periods; you'd want to warn them about the upcoming expiration. Also not sure how to tell if they've set their own clock back.
In short, nothing will be perfect.
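To illustrate the scrambled-key idea, here's a minimal sketch using an HMAC over the expiry date - not a hardened scheme, since the secret has to ship inside the application, which is exactly the weakness discussed in the other answers:

```python
import hashlib
import hmac
from datetime import date

SECRET = b"replace-with-your-product-secret"  # hypothetical; ships with the app

def make_key(customer: str, expires: date) -> str:
    """Vendor side: sign 'customer|expiry' so the date can't be edited."""
    payload = f"{customer}|{expires.isoformat()}".encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()[:16]
    return f"{expires.isoformat()}:{sig}"

def key_is_valid(customer: str, key: str) -> bool:
    """Client side: recompute the signature and check the expiry date."""
    expiry_str, sig = key.split(":")
    payload = f"{customer}|{expiry_str}".encode()
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()[:16]
    return hmac.compare_digest(sig, expected) and \
        date.fromisoformat(expiry_str) >= date.today()

key = make_key("Acme Corp", date(2026, 1, 31))
print(key, key_is_valid("Acme Corp", key))
```

Note that this still trusts the client's clock, as pointed out above.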
I've dealt with this before, and it's not possible to make a perfect system. There are risks in anything you do. The best thing is to weigh your options and determine the method that has the least likelihood of being hacked and the most likelihood of working correctly and easily for the customer.
Like others have said, they could change their clock and invalidate the license checking mechanism. If you didn't trust the user, you could make the license system connect to your servers. You would then need to ensure that they always have a connection to your servers to check the license.
What if there is a valid reason that they cannot access your server?
Their internet connection has a problem.
YOUR internet connection has a problem.
In that case, should you disable the application? Probably not. But then again, what if they shut down the connection on purpose? Then you would WANT to disable the application.
If you give them a monthly key, you're adding a monthly annoyance and you may lose a customer after a while (people tend to do business with those who make it easy).
For example: if you base it on their clock, and the application needs their clock to be accurate for some other reason, then it's unlikely that the customer will change their clock.
I agree with Stephen but ultimately, I think that your contract is your best ally here.
As has been previously mentioned, you don't want to inconvenience customers, especially if you have a large deployment.
As for SaaS, if I were a customer using your product and you said that the model is changing and we need to access the software from your server and ours must be decommissioned, I'd not be happy. I'd probably use the opportunity to switch packages.
In corporate settings, the contract really is the best way to handle these issues. I've worked on licensing issues for desktop and ASP.NET applications and they can cause a number of headaches for both you and your client.
However, if you insist on using something like this I suggest you go with a middle ground. Instead of only unlocking the application for a week or two, provide a license for 6 months or a year. This way, if you run into licensing issues (and you will run into issues) they only occur once a year rather than a couple of times per month. That will be cheaper for you in support and your clients will be less unhappy about dealing with licensing issues. If the company stops paying and you need to terminate the license you can handle that on a one-off basis, using contract enforcement as needed.
On the web service or client license options, I think a good license system would incorporate both: a client license to give the application a stable license, and a web service to generate and deliver the license key when it is time for the application to be renewed. If the client won't allow the application to call home to get the license key, also provide a manual entry method.
If you are going to store a license on the client, do not try to build a component yourself. There are many components available which will be much more robust and reliable than the one you build. There is a .NET .licx-based licensing method and a number of 3rd party methods that you can use. Which one is most appropriate depends on your scenario: how flexible you want the license and what other options you need. Most importantly, find something reliable - any time your customers spend fixing problems caused by licensing is non-productive for them and will reflect poorly on the application.
The important thing to keep in mind is that no system is foolproof. If your application is valuable, someone is going to figure out how to steal it. But at the corporate level and with custom software, it's more likely the licensing will be used to remind people to pay rather than to stop wholesale piracy.

Architecture for a machine database [closed]

This might be more of a serverfault.com question but a) it doesn't exist yet and b) I need more rep for when it does :~)
My employer has a few hundred servers (all *NIX) spread across several locations. As I suspect is common we don't really know how many servers we have: more than once I've been surprised to find a server that's been up for 5 years, apparently doing nothing but elevating the earth's temperature slightly. We have a number of databases that store bits of server information -- Puppet, Cobbler, Nagios, Cacti, our load balancers, DNS, various internal spreadsheets and so on but it's all very disparate, incomplete and overlapping. Maintaining this mess costs time and money.
So, I'd like to come up a single database which holds details of what each server is (hardware specs, role, etc) and replaces (or at least supplies data for) the databases mentioned above. The database and web interface are likely to be a Rails app as this is what I have most experience with. I'm more of a sysadmin than a coder.
Has this problem already been solved? I can't find any open source software that really fits the bill and I'm generally not too keen on bloaty, GUI vendor-supplied solutions.
How should I implement the device information collection bit? For instance, it'd be great to have the database update device records when disks are added or removed, or when the server serial number changes because HP replaces the board. This information comes from many different sources: dmidecode, command-line disk tools, SNMP against the server or its onboard lights-out card, and so on. I could expose all this through custom scripts and net-snmp, or I could run a local poller that reports the information back to the central DB (maybe via a RESTful interface or something). It must be easily extensible.
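As one possible shape for the local-poller option, here's a minimal sketch; the REST endpoint is hypothetical, and it assumes dmidecode is installed and the script runs as root:

```python
import json
import socket
import subprocess
import requests

INVENTORY_API = "https://inventory.example.com/api/servers"   # hypothetical

def collect() -> dict:
    """Gather basic hardware facts from dmidecode (needs root)."""
    def dmi(keyword: str) -> str:
        return subprocess.run(
            ["dmidecode", "-s", keyword],
            capture_output=True, text=True, check=True,
        ).stdout.strip()

    return {
        "hostname": socket.getfqdn(),
        "serial": dmi("system-serial-number"),
        "manufacturer": dmi("system-manufacturer"),
        "product": dmi("system-product-name"),
    }

def report() -> None:
    """Push the facts to the central database over a REST interface."""
    facts = collect()
    requests.put(f"{INVENTORY_API}/{facts['hostname']}",
                 data=json.dumps(facts),
                 headers={"Content-Type": "application/json"},
                 timeout=10).raise_for_status()

if __name__ == "__main__":
    report()
```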
Have you done this? How? Tell me your experiences, discoveries, mistakes and recommendations!
This sounds like a great LDAP problem looking for a solution. LDAP is designed for this kind of thing: a catalog of items that is optimized for data searches and retrieval (but not necessarily writes). There are many LDAP servers to choose from (OpenLDAP, Sun's OpenDS, Microsoft Active Directory, just to name a few ...), and I've seen LDAP used to catalog servers before. LDAP is very standardized and a "database" of information that is usually searched or read, but not frequently updated, is the strong-suit of LDAP.
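For a feel of what the LDAP approach looks like from a script, here's a small sketch using the ldap3 Python library against a hypothetical ou=servers subtree:

```python
from ldap3 import Connection, Server

# Hypothetical directory layout: one entry per server under ou=servers.
server = Server("ldap://ldap.example.com")
conn = Connection(server, user="cn=reader,dc=example,dc=com",
                  password="secret", auto_bind=True)

# Find every server entry and pull a few descriptive attributes.
conn.search(
    search_base="ou=servers,dc=example,dc=com",
    search_filter="(objectClass=device)",
    attributes=["cn", "serialNumber", "description", "l"],  # l = location
)
for entry in conn.entries:
    print(entry.cn, entry.serialNumber, entry.description)
```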
My team has been dumping all our systems into RDF for a month or two now. We have the systems implementation people create the initial data in Excel, which is then transformed to N3 (RDF) using Perl.
We view the data in Gruff (http://www.franz.com/downloads.lhtml) and keep the resulting RDF in Allegro (a triple store from the same guys that do Gruff)
It's incredibly simple and flexible - no schema means we simply augment the data on the fly, and with a wide variety of RDF viewers and reasoning engines the presentation options are endless.
The best part for me? No coding: just create triples, throw them in the store, then view them as graphs.
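For anyone who'd rather script this in Python than Perl, creating those triples looks roughly like this with the rdflib library (the vocabulary is invented for illustration):

```python
from rdflib import Graph, Literal, Namespace, RDF

# Invented vocabulary: adjust to whatever ontology your team settles on.
INFRA = Namespace("http://example.com/infra#")

g = Graph()
host = INFRA["web01"]
g.add((host, RDF.type, INFRA.Server))
g.add((host, INFRA.location, Literal("London")))
g.add((host, INFRA.ram_gb, Literal(32)))
g.add((host, INFRA.runs, INFRA["nginx"]))

# Serialize as N3, the same format the Perl pipeline emits.
print(g.serialize(format="n3"))
```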
The collection of detailed machine information is a very frustrating problem (many vendors want to keep it this way). Even if you can spend a large amount of money, you probably will not find a simple solution to this problem. IBM and HP offer products that achieve what you are seeking, but they are very, very expensive, and will leave a bad taste in your mouth once you realize that all you probably needed was 40-50% of the functionality they offer.
You say that you need to monitor *NIX servers... most (if not all) Unices support RFC 1514 (Windows also supports this RFC as of Windows 2000). The Host MIB support defined by RFC 1514 has its drawbacks, however. Since it is SNMP based, it requires that SNMP be enabled on the machine, which is typically not the default for Unix and Windows machines. The reason for this is that SNMP was created before the entire world was using the Internet, and thus the old, crusty nature of its security is of concern. In many environments this may not be acceptable for security reasons. However, if you are only dealing with machines behind the firewall, this might not be an issue (I suspect this is true in your case).
Several years ago, I was working on a product that monitored hundreds of Unix and Windows machines. At the time, I did extensive research into the mechanics of how to acquire detailed information from each machine - disk info, running processes, installed software, up-time, memory pressure, CPU and IO load (including network) - without running a custom client on each machine. This info can be collected in a centralized fashion. As of three or four years ago, the RFC 1514 Host MIB spec was the only "standard" for acquiring detailed real-time machine info without resorting to OS-specific software. Sun and Microsoft announced a web-service-based initiative many years ago to address some of this, but I suspect it never received any traction, since I cannot at the moment even remember its marketing name.
I should mention that RFC 1514 is certainly no panacea. You are at the mercy of the OS-provided SNMP service, unless you have the luxury of deploying a custom info-collecting client to each machine. The RFC-1514 spec dictates that several parameters are optional, and if your target OS does not implement it, then you are back to custom code to provide the information.
I'm contemplating how to go about this myself, and I think this is one of the key pieces of infrastructure whose absence keeps us in the dark ages. Hopefully this will be a popular question on serverfault.com. :)
It's not just a matter of installing a single tool to collect this data - that's not possible cheaply - because ideally you want everything from the hardware up to the applications on the network feeding into this thing.
I think the only approach that makes sense is a modular one. The range of devices and types of information is too disparate to come under a single tool. Also the collection of data needs to be as passive and asynchronous as possible - the reality of running infrastructure means that there will be interruptions and you can't rely on being able to get the data at all times.
I think the tools you've pointed out form something of an ecosystem that could work together - Cobbler can install from bare-metal and hand over to Puppet, which has support for generating Nagios configs, and storing configs in a database; for me only Cacti is a bit opaque in terms of programmatically inserting new devices, templates etc. but I know this is possible.
Ultimately you have to sit down and work out which pieces of information are important for the business you work for, and design a db schema around that. Then, work out how to get the information you need into the db, whether it's from Facter, Nagios, Cacti, or direct snmp calls.
Since you asked about collection of data: I think if you have quite disparate kit (Dell, HP, etc.) then it makes sense to create a library to abstract away the differences between them as much as possible, so your scripts just make standard calls such as "checkdiskhealth". When you add new hardware you can add to the library rather than having to write a completely new script.
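A rough sketch of that abstraction idea - one vendor-neutral call backed by per-vendor drivers. The vendor utilities wrapped here are placeholders, so substitute whatever your kit actually ships with:

```python
import subprocess

class Hardware:
    """Common interface; each vendor subclass hides its own tooling."""
    def check_disk_health(self) -> str:
        raise NotImplementedError

class DellHardware(Hardware):
    def check_disk_health(self) -> str:
        # Placeholder: wrap Dell's OpenManage reporting tool.
        out = subprocess.run(["omreport", "storage", "pdisk"],
                             capture_output=True, text=True)
        return out.stdout

class HpHardware(Hardware):
    def check_disk_health(self) -> str:
        # Placeholder: wrap HP's array controller utility.
        out = subprocess.run(["hpacucli", "ctrl", "all", "show", "status"],
                             capture_output=True, text=True)
        return out.stdout

def driver_for(vendor: str) -> Hardware:
    """Scripts ask for a driver by vendor name and never see the tools."""
    return {"dell": DellHardware, "hp": HpHardware}[vendor.lower()]()

print(driver_for("Dell").check_disk_health())
```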
Sounds like a common problem that larger organizations would have. I know our sysadmin (at a 50-person company) has a little Access database of information about every server, license, and piece of hardware installed. He's very meticulous, but when it comes time to replace or repair hardware, he knows everything about it from his little DB.
You and your organization could sponsor an open source project to get you what you need, and give back to the community so that additional features (that you may not need now) can be developed at no cost to you.
Maybe a simple web service? Just something that accepts a machine name or IP address. When the service gets input, it sticks it in a queue and kicks off a task to collect the data from the machine that notified it. The nature of the task (SNMP interrogation, remote call to a Perl script, whatever) could be stored as part of the machine information in the database. If the task fails, the machine ID stays in the queue and the machine is periodically re-polled until the information is collected. Of course, you also have to have some kind of monitor running on your servers to notice that something has changed and send the notification; hopefully this is easily accomplished with whatever server monitoring software you've already got in place.
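A minimal version of that notification service might look like the sketch below, using Flask and an in-process queue (a real deployment would want a persistent queue, authentication, and back-off on retries):

```python
import queue
import threading

from flask import Flask, request

app = Flask(__name__)
pending = queue.Queue()   # machines waiting to be polled

@app.route("/notify", methods=["POST"])
def notify():
    """A server calls this when something about it has changed."""
    machine = request.get_json()["machine"]
    pending.put(machine)
    return {"queued": machine}, 202

def worker():
    """Drain the queue; on failure, requeue so the machine is re-polled."""
    while True:
        machine = pending.get()
        try:
            collect_from(machine)   # SNMP walk, remote script, etc.
        except Exception:
            pending.put(machine)    # retry later rather than lose it

def collect_from(machine: str) -> None:
    ...   # look up the machine's collection method in the database

if __name__ == "__main__":
    threading.Thread(target=worker, daemon=True).start()
    app.run()
```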
There are some solutions from the big vendors for managing monstrous sets of machines - such as some of the Tivoli stuff from IBM. That is probably, however, overkill for mere hundreds of machines.
There are some free software server database solutions, but I do not know if they provide hooks to update information automatically from the machines with dmidecode or SNMP. One I have heard about (but have no personal experience with, sorry) is GLPI.
I believe you are looking for Zabbix. It's open source, and easy to install and use.
I installed it for a client a few years ago, and if I remember right it has a client application that connects to the Zabbix server to update it with the requested information.
I really recommend it: http://www.zabbix.com
Check out Machdb. It's an open source solution to the problem you are describing.
