Suppose a cinema company wants to install an online ticketing system that allows users to book movie tickets online. What would be the best installation strategy?
Before this online system, customers had to go to the cinemas and buy tickets at the counters.
During the installation process, how do we make sure that once a seat is booked online, the counters at the cinema branches also show that the seat has been booked?
Here are the installation strategy options:
a. Direct Installation
- Changing over from the old system to a new one by turning off the old system when the new system is turned on.
b. Parallel Installation
- running the old information system and the new one at the same time until management decides the old system can be turned off.
c. Single-location installation
- trying out an information system at one site and using the experience to decide if and how the new system should be deployed throughout the organization.
d. Phased Installation
- changing from the old information system to the new one incrementally, starting with one or a few functional components and then gradually extending the installation to cover the whole new system.
In your opinion, which method is best for the case above? Thank you!
a. Direct Installation. You do not want the problem of merging data from two different systems. Also, you have all the time in the world to set up and test the new system while the old is still running. You should know very well how to use the new system once you go live. The key is proper testing.
The question, though, is a bit vague and I am not sure that it is truly a programming question that belongs on SO.
I'd say that Direct Installation wouldn't be the best way to switch, because people would still turn up at the cinema wanting to buy tickets at the counter.
In my opinion, Parallel Installation would be a good choice because it would allow people to get used to the new system and settle into using it before the old one is switched off, but b, c or d could all work.
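On the consistency question (making the counters show seats that were booked online), the usual answer is to have both the web front-end and the counter terminals write to one shared database and reserve a seat with an atomic, conditional update. Here is a minimal sketch in Python with sqlite3; the table, column and seat names are assumptions for illustration, not something from the question.

```python
import sqlite3

def book_seat(conn, showing_id, seat_no, channel):
    """Atomically reserve a seat; works the same whether the booking
    comes from the website or a counter terminal, because both talk
    to the same database."""
    cur = conn.cursor()
    # The UPDATE only succeeds if the seat is still free, so two
    # simultaneous bookings for the same seat cannot both win.
    cur.execute(
        """UPDATE seats
           SET status = 'BOOKED', booked_via = ?
           WHERE showing_id = ? AND seat_no = ? AND status = 'FREE'""",
        (channel, showing_id, seat_no),
    )
    conn.commit()
    return cur.rowcount == 1  # True if we got the seat, False if it was taken

# Example usage (assumes a 'seats' table already exists):
# conn = sqlite3.connect("cinema.db")
# if book_seat(conn, showing_id=42, seat_no="F7", channel="online"):
#     print("Seat reserved")
# else:
#     print("Seat already taken")
```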
I'm solving a problem that seems most appropriately handled by a graph database, so I wanted to get a graph database server up and running, and go from there. I'm a Python developer, so I was trying to get something running with the bulbs library, which seems mature and effective, based on the documentation.
Unfortunately, I haven't been able to find any monolithic guide that covers everything between bulbs and an actual graph database server, and my attempts to cobble together working versions have been hampered by a number of compatibility problems.
I feel like I might be missing something intrinsic to the design of these systems. I'm used to PostgreSQL, MariaDB, and other systems that follow a pretty simple two-layer model, bridged by a standard API. It seems like the Apache TinkerPop stack should be what I want, but Rexster appears to be a server rather than a storage backend, so I still need one of those? I'm a little confused, because Neo4j and Titan seem like they're also servers, in addition to storage backends, so I don't know why Rexster is necessary. Right now, I'm trying to get Neo4j to work with bulbs, but the Gremlin plugin is missing... I've spent more than a day trying to piece this software stack together, and I'm getting really close to just giving up and building a million mapping tables in an ORM.
Is there a monolithic installation guide that I can follow somewhere, or has anyone had experience getting this working in a sane amount of time? I can use any solution deployable on Fedora, Debian, or OpenBSD.
Your question is too broad to provide a good answer. Briefly, I will say that you are not going down a good path. Bulbs is no longer developed. Rexster is TinkerPop 2.x, which is a line of code that is no longer maintained. Please see the TinkerPop web site, which has the full listing of current Python-related libraries for 3.x. However, before you even do that or worry about Titan or Neo4j, you should focus your time on learning the TinkerPop stack itself. Read the Getting Started tutorial. Get comfortable with the Gremlin Console. Play with GremlinBin a bit. Then get into the details of the reference documentation. If you start more slowly, you will likely have more success.
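As a concrete starting point for the TinkerPop 3.x route, here is a minimal sketch using the gremlinpython driver against a Gremlin Server assumed to be running locally on the default endpoint; the vertex labels and properties are made up for the example, and exact import paths can vary between gremlinpython releases.

```python
from gremlin_python.process.anonymous_traversal import traversal
from gremlin_python.process.graph_traversal import __
from gremlin_python.driver.driver_remote_connection import DriverRemoteConnection

# Connect to a Gremlin Server assumed to be listening on the default endpoint.
remote = DriverRemoteConnection('ws://localhost:8182/gremlin', 'g')
g = traversal().withRemote(remote)

# Create two vertices and an edge between them.
alice = g.addV('person').property('name', 'alice').next()
bob = g.addV('person').property('name', 'bob').next()
g.V(alice).addE('knows').to(__.V(bob)).iterate()

# Read the data back.
print(g.V().hasLabel('person').values('name').toList())

remote.close()
```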
I am in the process of designing a database for the first time outside of the classroom in order to make a future Java application work with complete desired functionality. As I try to design entity relationship diagrams and tables, I find myself always thinking about the Java project that comes later. I am beginning to wonder if this is making me more confused and making things more difficult for myself, and I am getting nervous that I might not be skilled enough yet to pull this off.
Should I just focus on producing the most normalized database I can and trust that it will allow for my application to do everything it needs to do?
Or,
Should I definitely be keeping my future application in mind with each step of database development to ensure total functionality?
Edit: I would also appreciate any recommendations on free database design tools.
Databases are notoriously hard to refactor, so if you know about something you haven't gotten to yet but are definitely going to do, you need to consider that in your design. This is especially true if the future something (for example, reporting) is going to need to look at lots of records or is going to need moment-in-time data as opposed to doing calculations on the fly. This is the difference between storing the cost of an order versus calculating it based on current prices, for instance. If you just look at the order process, you may think it is OK to just calculate the price, but reporting will need to know what the price was at the time the order happened, or the financial records will be messed up.
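To make the order-price point concrete, here is a minimal schema sketch (shown with Python's sqlite3; the table and column names are illustrative assumptions) in which each order line stores the unit price captured at order time instead of recomputing it later from the current product price:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE products (
    product_id INTEGER PRIMARY KEY,
    name       TEXT NOT NULL,
    price      NUMERIC NOT NULL          -- current price, may change over time
);

CREATE TABLE orders (
    order_id   INTEGER PRIMARY KEY,
    ordered_at TEXT NOT NULL
);

CREATE TABLE order_items (
    order_id   INTEGER REFERENCES orders(order_id),
    product_id INTEGER REFERENCES products(product_id),
    quantity   INTEGER NOT NULL,
    unit_price NUMERIC NOT NULL,         -- price captured at order time, so
                                         -- later price changes do not rewrite
                                         -- financial history
    PRIMARY KEY (order_id, product_id)
);
""")
```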
You might read this:
What are the general guidelines and best practices to keep in mind while designing database for an application?
So I am almost finished with my first app. It was constructed in Visual Studio/C#. I am now trying to determine which license to run with. I plan to provide the program free of charge to businesses and consumers, however, I do not want to publish the source code.
What is the best licensing format to go with? This application is kind of a teaser for a more powerful version so I would like to publish this one for free and hopefully make some money selling the 'power user' version.
I will be packaging PuTTY with this. I am about to go read their site, as I may need to roll my own SSH client.
I will be headed to my lawyer this week. Just want to get a bit of knowledge before I talk to her so I don't look like a dumb fool. Thanks in advance for your input!
PuTTY uses the MIT licence, so all you need to do is incorporate that licence along with your software. For your code you can use whatever licence you choose; no one will ask for your code unless you explicitly want to make it public.
Are there any standard or "best practice" ways of limiting feature functionality for a particular application?
Example: We have a product with a variety of features, and our customers can pick and choose which features they would like to use, and the cost of the product varies based on which features they are actually using.
In the past, we have distributed along with our software installer an encrypted license file that contains information about the customer, as well as the collection of features that they have enabled. In code, we read from the license file and enable the functionality according to the license file.
This seems to work fine, except there are a few disadvantages:
Upgrading users with new functionality can be sort of a pain
If a particular feature shows up in multiple places throughout the application, a developer might not realize that this feature should be licensed, and forget to check the license file before granting functionality to the user
If the license file becomes corrupted, deleted, moved, renamed, etc. the application will not run
We're getting ready to roll out a new set of features, and I was just curious what others in the community have done to tackle this problem?
Why not break down the product into modules like Matlab? Then charge for each module. The licensing can be kept online and the end user just needs to download the module to enable the feature.
There are usually 3 common approaches to this:
using fixed program versions (each version just adds features; you can't customize which features you want). You can also use "sub-versions", like Basic and Pro editions of Software x.0. Windows uses this approach.
Having modules of functionality, which are products in themselves. Matlab uses this approach.
Having software with basic functionality, and then having plugins or extra apps for sale. Eclipse uses this approach (though it's free).
You can also mix these approaches for better customizability.
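Whichever approach you choose, the second disadvantage in the question (a developer forgetting to check the licence somewhere) is usually reduced by funnelling every check through a single gate object instead of reading the licence file at many call sites. A minimal sketch in Python, with the file format and feature names invented for illustration:

```python
import json

class FeatureGate:
    """Single place in the code base that knows which features are licensed."""

    def __init__(self, license_path):
        # In a real product the file would be signed/encrypted and verified
        # here; this sketch just reads plain JSON for illustration.
        try:
            with open(license_path) as f:
                data = json.load(f)
            self._features = set(data.get("features", []))
        except (OSError, ValueError):
            # Corrupted or missing licence: fall back to an empty feature set
            # instead of refusing to start at all.
            self._features = set()

    def is_enabled(self, feature):
        return feature in self._features

# Every call site asks the gate instead of re-reading the licence file:
# gate = FeatureGate("license.json")
# if gate.is_enabled("advanced_reporting"):
#     show_reporting_ui()
```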
I need to create a software license key, and one of the requirements is to bind the key to a particular server, to avoid image duplication.
1. What is the preferred way to achieve this task (CPU, MAC, other)?
2. Can it be achieved on virtual machines?
Extra credit for cross-platform approaches :)
Follow the same approach that Windows activation does. Collect information about the hardware, convert it into some sort of hash, and there is your machine key.
Check here to see what hardware it watches:
Windows Product Activation
Windows Product Activation (WPA) on Windows XP
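A minimal sketch of the hash-the-hardware idea in Python, combining a few identifiers that the standard library can read portably (host name, architecture, CPU description, MAC address); a real implementation would likely add platform-specific values such as disk or motherboard serial numbers:

```python
import hashlib
import platform
import uuid

def machine_key():
    """Derive a machine key by hashing a few hardware/OS identifiers.

    Note: uuid.getnode() returns the MAC address of one interface (or a
    random value if none can be read), so the key changes if the network
    card changes.
    """
    parts = [
        platform.node(),                 # host name
        platform.machine(),              # architecture, e.g. 'x86_64'
        platform.processor(),            # CPU description (may be empty)
        format(uuid.getnode(), "012x"),  # MAC address of one interface
    ]
    fingerprint = "|".join(parts)
    return hashlib.sha256(fingerprint.encode("utf-8")).hexdigest()

# print(machine_key())
```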
Please don't do anything like this: all you are doing is pissing off your legitimate customers. The bad guys will find a way around whatever you do.
The last thing any customer wants is to be up at 4:00am trying to convince a piece of software they paid good money for that, yes, it is still running on the same machine as before, only the network card / disk controller / motherboard / etc. has been changed.
The tricky thing is to find a unique key determined by the above, with some redundancy, i.e. to allow that either the CPU(s), MAC or hard disk is replaced, but not all of them. Actually, the CPU ID is less likely to change than the MAC or hard disk, so it is more suitable. It can be achieved on virtual machines, although virtual machines can also clone these IDs. In that case you may want to combine an active license with a single internet-based server which validates the activity; this way, if VMs are cloned, only one can be active.