Java access to a Windows share with JCIFS vs a Linux mount? - apache-camel

I'm setting up an integration for transferring files to and from a Windows file share using Apache Camel.
For this I could use either https://camel.apache.org/jcifs.html or a plain file endpoint on a local Samba mount (this will be running on RHEL).
I would expect JCIFS to incur a small performance cost, but also to give the integration more control, with configuration and handling kept transparent within the setup itself.
On the other hand, development on JCIFS seems to have ended in 2011.
Anyone with experience with JCIFS in this kind of setup? Or just insight into pros and cons compared to smbmount?
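To make the comparison concrete, here's roughly how the two options would look as routes. This is only a sketch: the smb:// URI format is my reading of the JCIFS component page linked above (the component itself lives in the camel-extra project, I believe), and every host, share, credential and mount point below is a placeholder.
```java
import org.apache.camel.builder.RouteBuilder;

public class ShareRoutes extends RouteBuilder {
    @Override
    public void configure() {
        // Option 1: JCIFS endpoint. SMB is spoken from inside the JVM, so
        // credentials and polling stay in the integration setup itself.
        from("smb://MYDOMAIN;svcuser@fileserver/share?password=secret&delay=15000")
            .to("file:/data/inbound");

        // Option 2: plain file endpoint over a local Samba/CIFS mount,
        // e.g. mount.cifs //fileserver/share /mnt/winshare on RHEL.
        from("file:/mnt/winshare?delete=true")
            .to("file:/data/inbound");
    }
}
```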

Major effort goes into making Windows file shares mountable on *nix through the Samba driver, by Red Hat developers among others. There was a new release a few days ago. The same can't be said about JCIFS, which, as you point out yourself, hasn't changed in many years.
If stability and security matter to you, this may be important.
That said, JCIFS works in many scenarios I've tested and shouldn't be ruled out for simple cases where portability matters more than stability, but I've heard stories about authentication problems across different JDK versions, among other things.

Related

Revisiting MS Access as Enterprise Software

It's been 10 years since this question was asked and answered here and I'd like to see what current thoughts are.
We have a third party app that we've supported for at least that long. It's an Access runtime application that connects to SQL Server and contains highly confidential data.
Some years ago we moved the database to an SQL Server instance running on Server Core. More recently we've been asked to run the first upgrade of the database schema in 6 years. The vendor-provided upgrade package appears to be built using VB6 and won't run on the server. It also doesn't support running the updates remotely. We have a couple of ways that we can get it done, but it has presented me with an opportunity to finally move on from what I think is not an enterprise product.
As part of that I've been asked why I think this product is so bad and, in my estimation, antiquated. My immediate internal response is that it's not a real application, it's Access. That's compounded by the fact that we're paying a pretty good bit for it and I think that there are better, more robust solutions now available that are also cheaper (I think in the end that's all that should matter).
That said, I acknowledge that there may be some bias in my opinions on this particular app. Looking back at that old post, a few things stand out.
I think there's a big difference between internally developed applications built this way and paid-for solutions. Supporting an internally developed app written in Access may still have some positives. I don't think the positives pointed out in the top answer hold up when you're paying someone for it. The disadvantages are precisely what we're running into.
Reporting isn't being done in Access. It's now mostly being done with outside tools. Most users want to see web based reporting.
A couple of the responses mentioned professional Access developers or this type of application being the COBOL of the 21st century. I think that's an apt description. I'm not sure professional Access developers still exist. How long should we try to maintain this and how long do we think the vendor will be able to?
I think the main mistake about Access is to consider it as a tool made for amateurs to develop applications. It can work this way, but keep in mind that amateur development will give you amateur applications, while professional development will give you professional results.
Maybe this is the crux of my problem in particular. I'm not convinced that our application is 'professional'. It feels semi-pro if I'm generous. The VB6 updater is one clue and there are other components that have given me cause for concern over the years.
Fair or not, in my mind, most, if not all Access applications in the enterprise have these same issues. At the end of the day, the question is whether it serves the needs of the department using it.
Where does Access fit in the enterprise in 2019?

How to connect mainframe data to ClearCase?

I joined a new company using mainframes (z/OS) for development, documentation, design and testing. Now we are planning to migrate the whole system to ClearCase for version control.
My question is: are there any connectors I can use to connect the mainframe to ClearCase, so that users can check in/check out straight into their mainframe environment?
I am looking for a couple of solutions to do this, like tools I can use and so on.
If you're talking about storing the stuff in Clearcase on a distributed platform and having it accessible from z/OS, I'm not aware of any way to do that but, since z/OS comes with a full blown UNIX environment built in with access to both the UNIX file systems and z/OS data sets, it may well be doable.
What is doable, since we've done it (and, in fact, we developed large chunks of it) is to access data on the mainframe from a distributed platform.
RD/z (Rational Developer for System z, although it may have changed names by now) has a plug-in which allows Eclipse to tie in to the mainframe quite easily. It uses a started task to communicate with SCLM as its primary library manager but, from memory, you can directly access the non-SCLM data sets as well.
If you combined that with a ClearCase Eclipse plug-in, you could quite easily put together a system which, while it kept everything in a ClearCase repository, could push it up to the mainframe and build it automatically, pulling down the built artefacts if you also wanted them stored in ClearCase.
All from one environment. Of course, the dinosaurs will question your sanity at storing their 'precious' in a flaky distributed system - the fact that both products come out of IBM in no way alleviates the rivalry between mainframers and lesser mortals :-)
If that's not ideal and the company has a mainframe (and it appears they do), they will also have a support contract with IBM. Best bet would be to raise a Q&A with IBM to see if there are any solutions already out there - please don't raise a PMR for this, it's not a bug and that just annoys support/development and makes the monthly targets harder to meet :-) Alternatively, ask the people that would know. IBM developers monitor these forums and should be able to help you out more than I.
On the off-chance that you don't have a developerWorks account, keep an eye on this question that I started on your behalf. I'm actually interested in the results myself, since part of my day job is the marrying of the mainframe and distributed worlds. Although, unfortunately, it sometimes seems like it's about as successful as most Hollywood or other celebrity marriages :-)

Architecture for a machine database [closed]

This might be more of a serverfault.com question but a) it doesn't exist yet and b) I need more rep for when it does :~)
My employer has a few hundred servers (all *NIX) spread across several locations. As I suspect is common we don't really know how many servers we have: more than once I've been surprised to find a server that's been up for 5 years, apparently doing nothing but elevating the earth's temperature slightly. We have a number of databases that store bits of server information -- Puppet, Cobbler, Nagios, Cacti, our load balancers, DNS, various internal spreadsheets and so on but it's all very disparate, incomplete and overlapping. Maintaining this mess costs time and money.
So, I'd like to come up with a single database which holds details of what each server is (hardware specs, role, etc.) and replaces (or at least supplies data for) the databases mentioned above. The database and web interface are likely to be a Rails app, as this is what I have most experience with. I'm more of a sysadmin than a coder.
Has this problem already been solved? I can't find any open source software that really fits the bill and I'm generally not too keen on bloaty, GUI vendor-supplied solutions.
How should I implement the device information collection bit? For instance, it'd be great if the database updated device records when disks are added or removed, or when the server serial number changes because HP replaces the board. This information comes from many different sources: dmidecode, command-line disk tools, SNMP against the server or its onboard lights-out card, and so on. I could expose all this through custom scripts and net-snmp, or I could run a local poller that reported the information back to the central DB (maybe via a RESTful interface or something). It must be easily extensible.
Have you done this? How? Tell me your experiences, discoveries, mistakes and recommendations!
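To make the "local poller" option concrete, something like this little agent is what I have in mind. A sketch only: the central service URL and endpoint shape are invented.
```java
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.InetAddress;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class InventoryPoller {
    public static void main(String[] args) throws Exception {
        // Gather hardware facts locally (dmidecode needs root).
        Process p = new ProcessBuilder("dmidecode", "-t", "system").start();
        String facts = new String(p.getInputStream().readAllBytes(), StandardCharsets.UTF_8);
        p.waitFor();

        // Push them to the central inventory DB over a RESTful interface.
        // The URL is a placeholder for whatever the central app exposes.
        URL url = new URL("http://inventory.example.com/machines/"
                + InetAddress.getLocalHost().getHostName());
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("PUT");
        conn.setDoOutput(true);
        conn.setRequestProperty("Content-Type", "text/plain");
        try (OutputStream out = conn.getOutputStream()) {
            out.write(facts.getBytes(StandardCharsets.UTF_8));
        }
        System.out.println("Inventory service responded: " + conn.getResponseCode());
    }
}
```
Cron (or similar) would run it periodically; the same shape works for disk tools or SNMP output.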
This sounds like a great LDAP problem looking for a solution. LDAP is designed for this kind of thing: a catalog of items that is optimized for data searches and retrieval (but not necessarily writes). There are many LDAP servers to choose from (OpenLDAP, Sun's OpenDS, Microsoft Active Directory, just to name a few...), and I've seen LDAP used to catalog servers before. LDAP is very standardized, and a "database" of information that is usually searched or read, but not frequently updated, is its strong suit.
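To give a feel for the read side, here is a minimal JNDI lookup against such a catalog. The server, base DN and objectClass are illustrative; a real deployment would define its own schema.
```java
import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.NamingEnumeration;
import javax.naming.directory.DirContext;
import javax.naming.directory.InitialDirContext;
import javax.naming.directory.SearchControls;
import javax.naming.directory.SearchResult;

public class ServerCatalogLookup {
    public static void main(String[] args) throws Exception {
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
        env.put(Context.PROVIDER_URL, "ldap://ldap.example.com:389"); // placeholder

        DirContext ctx = new InitialDirContext(env);
        SearchControls controls = new SearchControls();
        controls.setSearchScope(SearchControls.SUBTREE_SCOPE);

        // List every entry representing a server under the catalog subtree.
        NamingEnumeration<SearchResult> results =
                ctx.search("ou=servers,dc=example,dc=com", "(objectClass=device)", controls);
        while (results.hasMore()) {
            System.out.println(results.next().getNameInNamespace());
        }
        ctx.close();
    }
}
```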
My team has been dumping all our systems into RDF for a month or two now. We have the systems implementation people create the initial data in Excel, which is then transformed to N3 (RDF) using Perl.
We view the data in Gruff (http://www.franz.com/downloads.lhtml) and keep the resulting RDF in Allegro (a triple store from the same guys that do Gruff).
It's incredibly simple and flexible: no schema means we simply augment the data on the fly, and with a wide variety of RDF viewers and reasoning engines the presentation options are endless.
The best part for me? No coding: just create triples, throw them in the store, then view them as graphs.
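If you'd rather stay in one language than go the Excel-and-Perl route, the same triples can be produced programmatically. Here is a sketch with Apache Jena (my choice for illustration, not what we use), with an invented namespace:
```java
import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;
import org.apache.jena.rdf.model.Property;

public class ServerTriples {
    public static void main(String[] args) {
        Model m = ModelFactory.createDefaultModel();
        String ns = "http://example.com/infra#"; // invented namespace
        Property role = m.createProperty(ns, "role");
        Property ram  = m.createProperty(ns, "ramGB");

        // No schema: just assert triples about a machine, augment later at will.
        m.createResource(ns + "web01")
                .addProperty(role, "load-balanced web server")
                .addProperty(ram, "16");

        m.write(System.out, "N3"); // the same N3 that goes into the triple store
    }
}
```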
The collection of detailed machine information is a very frustrating problem (many vendors want to keep it this way). Even if you can spend a large amount of money, you probably will not find a simple solution to this problem. IBM and HP offer products that achieve what you are seeking, but they are very, very expensive, and will leave a bad taste in your mouth once you realize that probably all you needed was 40-50% of the functionality they offer.

You say that you need to monitor *nix servers. Most (if not all) unices support RFC 1514, the Host Resources MIB (Windows also supports this RFC as of Windows 2000). The Host MIB support defined by RFC 1514 has its drawbacks, however. Since it is SNMP-based, it requires that SNMP be enabled on the machine, which is typically not the default for Unix and Windows machines. SNMP was created before the entire world was using the Internet, and thus the old, crusty nature of its security is a concern. In many environments this may not be acceptable for security reasons, but if you are only dealing with machines behind the firewall (as I suspect is true in your case), it might not be an issue.

Several years ago, I was working on a product that monitored hundreds of Unix and Windows machines. At the time, I did extensive research into the mechanics of acquiring detailed information from each machine (disk info, running processes, installed software, uptime, memory pressure, CPU and I/O load, including network) without running a custom client on each machine, so the info could be collected in a centralized fashion. As of three or four years ago, the RFC 1514 Host MIB spec was the only "standard" for acquiring detailed real-time machine info without resorting to OS-specific software. Sun and Microsoft announced a web-service-based initiative many years ago to address some of this, but I suspect it never gained any traction, since I cannot at the moment even remember its marketing name.

I should mention that RFC 1514 is certainly no panacea. You are at the mercy of the OS-provided SNMP service unless you have the luxury of deploying a custom info-collecting client to each machine. The RFC 1514 spec makes several parameters optional, and if your target OS does not implement them, then you are back to custom code to provide the information.
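For what it's worth, querying the Host MIB once SNMP is enabled on the target is only a few lines. A sketch using the SNMP4J library (my choice here, nothing the RFC mandates); the address and community string are placeholders:
```java
import org.snmp4j.CommunityTarget;
import org.snmp4j.PDU;
import org.snmp4j.Snmp;
import org.snmp4j.event.ResponseEvent;
import org.snmp4j.mp.SnmpConstants;
import org.snmp4j.smi.GenericAddress;
import org.snmp4j.smi.OID;
import org.snmp4j.smi.OctetString;
import org.snmp4j.smi.VariableBinding;
import org.snmp4j.transport.DefaultUdpTransportMapping;

public class HostMibPoller {
    public static void main(String[] args) throws Exception {
        Snmp snmp = new Snmp(new DefaultUdpTransportMapping());
        snmp.listen();

        CommunityTarget target = new CommunityTarget();
        target.setAddress(GenericAddress.parse("udp:192.0.2.10/161")); // placeholder host
        target.setCommunity(new OctetString("public"));
        target.setVersion(SnmpConstants.version2c);
        target.setTimeout(1500);
        target.setRetries(2);

        // hrSystemUptime from the RFC 1514 Host Resources MIB (1.3.6.1.2.1.25).
        PDU pdu = new PDU();
        pdu.setType(PDU.GET);
        pdu.add(new VariableBinding(new OID("1.3.6.1.2.1.25.1.1.0")));

        ResponseEvent resp = snmp.get(pdu, target);
        if (resp.getResponse() != null) {
            System.out.println(resp.getResponse().getVariableBindings());
        }
        snmp.close();
    }
}
```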
I'm contemplating how to go about this myself, and I think this is one of the key pieces of infrastructure whose absence keeps us in the dark ages. Hopefully this will be a popular question on serverfault.com. :)
It's not just a matter of installing a single tool to collect this data (that isn't possible cheaply); ideally you want everything from the hardware up to the applications on the network feeding into this thing.
I think the only approach that makes sense is a modular one. The range of devices and types of information is too disparate to come under a single tool. Also the collection of data needs to be as passive and asynchronous as possible - the reality of running infrastructure means that there will be interruptions and you can't rely on being able to get the data at all times.
I think the tools you've pointed out form something of an ecosystem that could work together - Cobbler can install from bare-metal and hand over to Puppet, which has support for generating Nagios configs and storing configs in a database; for me only Cacti is a bit opaque in terms of programmatically inserting new devices, templates, etc., but I know this is possible.
Ultimately you have to sit down and work out which pieces of information are important for the business you work for, and design a db schema around that. Then, work out how to get the information you need into the db, whether it's from Facter, Nagios, Cacti, or direct snmp calls.
Since you asked about collection of data, I think if you have quite disparate kit (Dell, HP etc.) then it makes sense to create a library to abstract away as much as possible the differences between them, so your scripts just make standard calls such as "checkdiskhealth". When you add new hardware you can add to the library rather than having to write a completely new script.
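A minimal sketch of what that abstraction library could look like (all the names here are invented):
```java
import java.util.List;

// Vendor-neutral boundary: scripts call these standard methods, and each
// kind of kit (Dell, HP, ...) gets its own implementation behind it.
public interface HardwareProbe {
    String checkDiskHealth();
    String serialNumber();
    List<String> disks();
}

// One implementation among several; adding new hardware means adding a
// class here rather than writing a completely new script.
class HpIloProbe implements HardwareProbe {
    public String checkDiskHealth() { return "OK";     /* query the iLO card  */ }
    public String serialNumber()    { return "ABC123"; /* dmidecode, iLO, ... */ }
    public List<String> disks()     { return List.of("sda", "sdb"); }
}
```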
Sounds like a common problem that larger organizations would have. I know our (50-person company) sysadmin has a little Access database of information about every server, license, and piece of hardware installed. He's very meticulous, but when it comes time to replace or repair hardware, he knows everything about it from his little DB.
You and your organization could sponsor an open source project to get you what you need, and give back to the community so that additional features (that you may not need now) can be developed at no cost to you.
Maybe a simple web service? Just something that accepts a machine name or IP address. When the service gets input, it sticks it in a queue and kicks off a task to collect the data from the machine that notified it. The nature of the task (SNMP interrogation, remote call to a Perl script, whatever) could be stored as part of the machine information in the database. If the task fails, the machine ID stays in the queue and the machine is periodically re-polled until the information is collected. Of course, you also have to have some kind of monitor running on your servers to notice that something has changed and send the notification; hopefully this is easily accomplished with whatever server monitoring software you've already got in place.
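Roughly, the queue-and-retry part could be as small as this (all names invented; the real collection task would come from the machine's database record):
```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class CollectorService {
    private final BlockingQueue<String> pending = new LinkedBlockingQueue<>();
    private final ScheduledExecutorService workers = Executors.newScheduledThreadPool(4);

    // Called by the web service when a machine reports a change.
    public void notifyChanged(String host) {
        pending.offer(host);
    }

    // Periodically drain the queue; failed machines go back in for a re-poll.
    public void start() {
        workers.scheduleWithFixedDelay(() -> {
            String host;
            while ((host = pending.poll()) != null) {
                try {
                    collect(host); // SNMP interrogation, remote Perl script, whatever
                } catch (Exception e) {
                    pending.offer(host); // stays queued until collection succeeds
                    break;               // back off until the next scheduled run
                }
            }
        }, 0, 30, TimeUnit.SECONDS);
    }

    private void collect(String host) throws Exception { /* per-machine task */ }
}
```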
There are some solutions from the big vendors for managing monstrous sets of machines - such as some of the Tivoli stuff from IBM. That is probably, however, overkill for mere hundreds of machines.
There are some free software server database solutions, but I do not know if they provide hooks to update information automatically from the machines with dmidecode or SNMP. One I have heard about (but have no personal experience with, sorry) is GLPI.
I believe you are looking for Zabbix. It's open source, easy to install and use.
I installed it for a client a few years ago, and if I remember right it has a client application that connects to the Zabbix server to update it with the requested information.
I really recommend it: http://www.zabbix.com
Check out Machdb. It's an open source solution to the problem you are describing.

Does anyone have database, programming language/framework suggestions for a GUI point of sale system?

Our company has a point of sale system with many extras, such as ordering and receiving functionality, sales and order history, etc. Our main issue is that the system was not designed properly from the ground up, so it takes too long to make fixes and handle requests from our customers. Also, the current technology we are using (Progress database, Progress 4GL for the language) imposes quite a bit of licensing expense on our customers due to multi-user license fees for database connections, etc.
After a lot of discussion it is looking like we will probably start over from scratch (while maintaining the current product at least for the time being). We are looking for a couple of things:
Create the system with a nice GUI front end (it is currently CHUI and the application was not built in a way that allows us to redesign the front end... no layering or separation of business logic and GUI... shudder).
Create the system with the ability to modularize different functionality so the product doesn't have to include all features. This would keep the cost down for our current customers that want basic functionality and a lower price tag. The bells and whistles would be available for those that would want them.
Use proper design patterns to make the product easy to add to or change at any time (i.e. change the database or change the front end without needing to rewrite the application or most of it). This is a problem today because the Progress 4GL code is compiled directly against the database; small changes in the database require lots of code recompiling.
Our new system will be Linux based, with the possibility of a client application providing functionality from one or more Windows boxes.
So what I'm looking for is any suggestions on which database and/or framework or programming language(s) someone might recommend for this sort of product. Anyone that has experience in this field might be able to point us in the right direction or even have some ideas of what to avoid. We have considered .NET and SQL Server Express (we don't need an enterprise-level DB), but that would limit us to Windows (as far as I know, anyway). I have heard of Mono for writing .NET code in a Linux environment, but I don't know much about it yet. We've also considered a Java- and MySQL-based implementation.
To summarize we are looking to do the following:
Keep licensing costs down on the technology we will use to develop the product (Oracle, yikes! MySQL, nice.)
Deliver a solution that is easily maintainable and supportable.
A solution that has a component capable of running on "old" hardware through a CHUI front end (some of our customers have 40+ terminals, which would cost a ton of cash to convert over to PCs).
Suggestions would be appreciated.
Thanks
[UPDATE]
I should note that we are currently performing a total cost analysis. This question is intended to give us a couple of "educated" options to look into and include in our analysis. Anyone who could share experiences/suggestions about client/server setups would be appreciated (not just those who have experience with point of sale systems... that would just be a bonus).
[UPDATE]
For anyone who is interested, we ended up going with Microsoft Dynamics NAV, LS Retail (a plugin for the point of sale and various other things) and then did some (and are currently working on) customization work on top of that. This setup gave us the added benefit of having a fully integrated g/l system, which our current system lacked.
Java for the language (or Scala if you want to be "bleeding edge"; depending on how you plan to support it and what your developers are like, it might be better, but also worse)
H2 for database
Swing for GUI
Reason: Free, portable and pretty standard.
Update: Missed the part where the system should be a client-server setup. My assumption was that the database and client should run on the same machine.
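To show how little ceremony the embedded option involves, a minimal H2 snippet (the file name and table are made up; note H2 also has a server mode if you do need to share the database between machines):
```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class H2Demo {
    public static void main(String[] args) throws Exception {
        // Embedded mode: the database is just a local file, no server process.
        try (Connection c = DriverManager.getConnection("jdbc:h2:./posdata", "sa", "")) {
            try (Statement s = c.createStatement()) {
                s.execute("CREATE TABLE IF NOT EXISTS item(id INT PRIMARY KEY, name VARCHAR(64))");
                s.execute("MERGE INTO item VALUES (1, 'Coffee')"); // upsert keyed on the PK
                try (ResultSet rs = s.executeQuery("SELECT name FROM item")) {
                    while (rs.next()) System.out.println(rs.getString(1));
                }
            }
        }
    }
}
```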
I suggest you first research your constraints a bit more - you made a passing reference to a client using a particular type of terminal - this may limit your options, unless the client agrees to upgrade.
You need to do a lot more legwork on this. It's great to get opinions from web forums, but we can't possibly know your environment as well as you do.
My broad strokes advice would be to aim for technology that is widely used. This way, expertise in the platform is cheaper than in "niche" technologies, and it will be easier to get help if you hit a brick wall. Of course, following this advice may not be possible if you have non-negotiable technology already in place at customers.
My second suggestion would be to complete a full project plan, with detailed specs and proper cost estimates, before going with the "rewrite from scratch" option. Right now, you're saying that it would be cheaper to rewrite the system than maintain it, and you don't really know how much it would cost to re-write.
I suggest you use the browser for the UI.
Organize your application as a web application.
There are tons of options for the back end. You can use Java + MySQL. A Java backend will save you from the Windows/Linux debate, as it will run on both platforms. You won't have any licensing cost for either Java or MySQL. (Edit: there are definitely a lot of other languages that have runtimes for both Linux and Windows, including PHP, Ruby, Python, etc.)
If you go this route, you may also want to consider Google Web Toolkit (GWT) for creating the browser based front-end in a modular fashion.
One word of caution, though. Browsers can be pesky when it comes to memory management. In our experience, this was the most significant challenge in doing a browser-based POS. You may want to check out Adobe Flex, which runs in the browser but might be more civil in its memory management.
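For a sense of scale, the server side of such a browser-fronted system starts as small as one servlet; the path and canned JSON below are invented for illustration:
```java
import java.io.IOException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// The browser (plain HTML, or a GWT client) talks HTTP to endpoints like
// this one; the POS business logic stays on the server.
@WebServlet("/items")
public class ItemServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        resp.setContentType("application/json");
        resp.getWriter().print("[{\"id\":1,\"name\":\"Coffee\"}]");
    }
}
```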
What is CHUI? Character-UI, as in VT terminals? Or even 3270 style?
It sounds like you need a 3-tier system - the database backend, a middle-layer that runs the bulk of the back-end business processes, and a front-end layer for the CHUI / GUI / data-gateway.
All three layers can reside on one machine; or you can distribute the tiers out to various servers. The front-end layer would control the actual terminals, whether they are VT-terminals, or a web-browser, or a custom-written 'client' application.
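The key to keeping those tiers swappable is that each one sees only an interface to the next. Here is a sketch of the database boundary (all the types are invented for illustration):
```java
import java.time.LocalDate;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// The front-end and business tiers depend only on this boundary, so the
// store behind it can change without rewriting them.
public interface SaleRepository {
    void record(Sale sale);
    List<Sale> salesOn(LocalDate day);
}

class Sale {
    final String item;
    final int cents;
    Sale(String item, int cents) { this.item = item; this.cents = cents; }
}

// One interchangeable backend; a JDBC implementation or a remote-service
// client could stand in the same slot.
class InMemorySaleRepository implements SaleRepository {
    private final Map<LocalDate, List<Sale>> byDay = new HashMap<>();
    public void record(Sale sale) {
        byDay.computeIfAbsent(LocalDate.now(), d -> new ArrayList<>()).add(sale);
    }
    public List<Sale> salesOn(LocalDate day) {
        return byDay.getOrDefault(day, List.of());
    }
}
```
With that in place, the CHUI, a web browser, or a custom client all drive the same business layer, and swapping the database is a matter of writing one new class.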
Make sure you have considered the hardware needs here -- are you going to have barcode scanners, cash drawers, POS debit/credit terminals, et cetera? If you are using a standard browser, it might be hard to reliably integrate those items. (At the very least, you're likely going to have to write special applets to handle them.)
Finally, consider the possibility of thin-client technology on Windows. It greatly simplifies system management, since you only have to upgrade the software centrally. Thin-client PCs are cheap -- sub-$200.
Golden Code Development (see www.goldencode.com) has a technology that does automated conversion of Progress 4GL (the schema and code... the entire application) to a Java application with a relational database backend (e.g. PostgreSQL).

They currently support a very complete CHUI environment and they do refactor the code. For example, the conversion separates the UI, the data model and the business logic into separate Java classes. The entire result is a drop-in replacement that is compatible with the original (users don't need retraining, processes don't need to be modified, the data is migrated too). This is possible because they provide an application server and a set of runtime classes that provide that compatibility. The result of the automated conversion is not something that needs further editing before you can compile and run it.

True terminal support is included, so hardware terminals still work (it requires a small JNI library to access NCURSES from Java). All the rest of the code in the runtime is pure Java. No Progress Software Corp technology is used in the resulting system, and it runs on Linux.
At least one converted system is already in production, running a 24 by 7 mission critical environment. It is a converted ERP system that their mid-sized pilot customer uses to run their entire business.

Shared File Database Suggestion

I would like to build and deploy a database application for Windows based systems, but need to live within the following constraints:
Cannot run as a server (i.e., have open ports);
Must be able to share database files with other instances of the program (running on other machines);
Must not require a DBA for maintenance;
No additional cost for run-time license.
In addition, the following are nice to have "features":
Zero-install (e.g., no registry entries, no need to put files in \Windows\..., etc.);
"Reasonable" performance (yes, that's vague);
"Reasonable" file size limitations (at least 1GB per table/file--just in case).
I've seen this question
Embedded Database for .net that can run off a network
but it doesn't quite answer it for me.
I have seen the VistaDB site, but while it looks promising, I have no personal experience with it.
I have also looked at SQLite, and while it seems good enough for Google, I (again) have no personal experience with it.
I would love to use a Java-based solution because it's cross-platform (even though my main target is Windows, I'd like to be flexible), and WebStart is a really nice way to distribute software, but the most commonly used embedded DBs (Derby and hsqldb) won't support shared access.
I know that I'm not the only one who's trying/tried to do this, so I'm hoping I could get some advice.
I'd go with SQLite. There are SQLite bindings for everything, and it's very widely used as an embedded database for a large number of applications.
I use SQLite at work, and one thing that you should keep in mind is that it's file-based and uses a file lock for managing concurrent connections. It is not a great solution when you have multiple users trying to use the database at the same time. SQLite is, however, a great database for a single-user application: it's fast, has a small footprint and has a thriving community built around it.
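If you do go down this road, here is roughly what it looks like from Java with the xerial sqlite-jdbc driver (the driver choice and path are my assumptions; also be aware that SQLite's own documentation warns its file locking is unreliable on many network file systems, which is exactly the multi-user caveat above):
```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class SharedFileDbDemo {
    public static void main(String[] args) throws Exception {
        // Z:/shared/app.db stands in for a database file on a network share.
        try (Connection c = DriverManager.getConnection("jdbc:sqlite:Z:/shared/app.db");
             Statement s = c.createStatement()) {
            // Wait up to 5s for another instance's file lock instead of
            // failing immediately with SQLITE_BUSY.
            s.execute("PRAGMA busy_timeout = 5000");
            s.execute("CREATE TABLE IF NOT EXISTS note(id INTEGER PRIMARY KEY, body TEXT)");
            s.execute("INSERT INTO note(body) VALUES ('hello')");
        }
    }
}
```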
If you've got VStudio sitting around, how about SQL Server Compact Edition 3.5? MSSQL running in-proc.
http://www.microsoft.com/sql/editions/compact/downloads.mspx
