I have a problem statement.
A client of mine has three years of data in very complicated Excel sheets. They currently do data entry, reporting and basically everything in Excel, and these files are passed around whenever someone needs to share data.
They eventually want to move to a web-based solution.
The first step we agreed on is to build a routine and a database. The routine will import the data from the Excel files into the database, and the database itself needs to be designed so that it can later serve as the back end for a web site.
The routine will be used once to import all the data from the last three years, and then on a periodic basis so that they can keep importing data until the web front end is ready or agreed upon.
My default approach would be to analyze the Excel sheets, design a normalized database, and write a Java service that uses POI to read the sheets, apply the business logic and insert the data into the database. The problem here is that any time they change the Excel layout or even the data format, the routine has to be rewritten.
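For concreteness, here is a rough sketch of the kind of routine I have in mind. My plan would be Java/POI, but the sketch below uses Python with openpyxl and sqlite3 purely as stand-ins; the file name, sheet name, column positions and target table are all assumptions about the client's workbook:

```python
# Rough sketch of the import routine (Python/openpyxl + sqlite3 used as
# stand-ins for Java/POI + JDBC; the file name, sheet name, column order
# and target table are assumptions about the client's workbook).
import sqlite3
from openpyxl import load_workbook

wb = load_workbook("client_data.xlsx", data_only=True)
ws = wb["Entries"]  # hypothetical sheet name

con = sqlite3.connect("client.db")
con.execute("""CREATE TABLE IF NOT EXISTS entries
               (entry_date TEXT, customer TEXT, amount REAL)""")

for row in ws.iter_rows(min_row=2, values_only=True):  # skip the header row
    entry_date, customer, amount = row[0], row[1], row[2]
    # Business rules / validation would go here.
    if customer is None:
        continue
    con.execute("INSERT INTO entries VALUES (?, ?, ?)",
                (str(entry_date), customer, float(amount or 0)))

con.commit()
```

The hard-coded sheet name and column positions are exactly the parts that break whenever they reshuffle the workbook.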
I am open to any approach, perhaps an ETL tool.
Please recommend.
I'm one of several analysts and data engineers working in a Snowflake database. We often have to write small ad-hoc bits of code to check tables and views. These are often quite repetitive tasks (e.g. filtering data on a certain reference, or joining FACT and DIM tables to add context).
I'd like to create a worksheet that we can all periodically add useful bits of code to, just to save time on common joins or to serve as a starting point for longer queries.
I've previously used SQL Server, where I was able to save template files with useful bits of code. I could open these files directly within SQL Server, so it was really easy to open, edit and run them. What are some similar features in Snowflake?
Thanks
Yes, but you must use the new web UI, Snowsight. In the new UI you can share worksheets with team members and give them edit access. You can also import your worksheets from the classic UI.
Keep in mind that team editing doesn't work like Google Docs, where you can see each other typing. However, every version of the worksheet is automatically saved and available from the upper right of the interface.
https://docs.snowflake.com/en/user-guide/ui-snowsight-gs.html
https://docs.snowflake.com/en/user-guide/ui-snowsight-worksheets-gs.html
I'm working on an Excel 2010 VSTO solution (doing code-behind for an Excel workbook in Visual Studio 2010) and am required to interact with a centralised SQL Server 2008 R2 data source for both read and write operations.
The database will contain up to 15,000 rows in the primary table plus related rows. Ideally, the spreadsheets will be populated from the database, used asynchronously, and then uploaded to update the database. I'm concerned about performance given the volume of data.
The spreadsheet would be made available for download via a Web Portal.
I have considered two solutions so far:
A WCF Service acting as a data access layer, which the workbook queries in-line to populate its tables with the entire requisite data set. Changes would be posted to the WCF service once updates are triggered from within the workbook itself.
Pre-loading the workbook with its own data in hidden worksheets on download from the web portal. Changes would be saved by uploading the modified workbook via the web portal.
I'm not too fussed about optimisation until we have our core functionality working, but I don't want to close myself off to any good options in terms of architecture down the track.
I'd like to avoid a scenario where we have to selectively work with small subsets of data at a time to avoid slowdowns; integrating that kind of behaviour into a spreadsheet sounds like needless complexity.
Perhaps somebody with some more experience in this area can recommend an approach that won't shoot us in the foot?
Having done similar:
Make sure that all of your data access code runs on a background thread. Excel functions and add-ins (mostly) operate on the UI thread, and you want your UI responsive. Marshalling is non-trivial (in Excel 2003 it required some P/Invoke; this may have changed in 2010), but possible.
Make your interop operations as chunky as possible. Each interop call has significant overhead, so if you are formatting programmatically, format as many cells as possible at once (using ranges). This advice also applies to data changes: you only want diffs updating the UI, which means keeping a copy of your dataset in memory to detect diffs. If lots of UI updates come in at once, you may want to insert some artificial pauses (throttling) to let the UI thread show progress as you are updating.
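The chunky-versus-chatty point is language-agnostic; as a rough illustration (Python with xlwings used here purely as a stand-in for the VSTO Range API, and it needs a local Excel installation to run):

```python
# Chunky vs. chatty interop: push a whole block of values in one range
# assignment instead of one COM call per cell. (xlwings is only a stand-in
# for the C#/VSTO Range API; requires Excel to be installed.)
import xlwings as xw

rows = [[i, f"item {i}", i * 1.5] for i in range(1000)]

wb = xw.Book()          # opens a new workbook in Excel
sheet = wb.sheets[0]

# Chatty version: thousands of round trips, one per cell.
# for r, row in enumerate(rows, start=1):
#     for c, value in enumerate(row, start=1):
#         sheet.range((r, c)).value = value

# Chunky version: a single call writes the entire block.
sheet.range("A1").value = rows
```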
You do want a middle-tier application to talk to SQL Server and the various Excel instances. This allows you to do good things later on, like caching, load balancing, hot failover, etc. This can be a WCF service as you suggested, or a Windows service.
If your spreadsheet file has to be downloaded from the server, you can use EPPlus on the server to generate the spreadsheet; it will be much faster than VSTO. Then you can use WCF from the add-in in the Excel app to upload the data. Reading data using a range will take much less time than writing if you don't have any formulas in your sheet. Also, in WCF you can batch the updates for the 15,000 rows; the entire operation should take approximately 2-5 minutes.
This is linked to my other question, when to move from a spreadsheet to an RDBMS.
Having decided to move from an Excel workbook to an RDBMS, here is what I propose to do.
The existing data is loosely structured across two sheets in a workbook. The first sheet contains the main records; the second holds additional related data.
My target DBMS is MySQL, but I'm open to suggestions.
Define RDBMS schema
Define, say, web services to interface with the database, so the same interface can be used for both the UI and the migration.
Define a migration script (a rough sketch follows this list) to:
Read each group of affiliated rows from the spreadsheet
Apply validation/constraints
Write to RDBMS using the web-service
Define macros/functions/modules in the spreadsheet to enforce validation where possible. This will allow the existing system to stay in use while the new one comes up and, I hope, will reduce migration failures when the move is eventually made.
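A rough sketch of what that migration script could look like (Python with openpyxl and requests; the workbook layout, the shared id column and the web-service URL are assumptions, not part of the proposal):

```python
# Sketch of the migration script: read each group of affiliated rows,
# validate, then write to the RDBMS through the same web service the UI
# will use. File name, column positions and URL are placeholders.
from collections import defaultdict

import requests
from openpyxl import load_workbook

wb = load_workbook("records.xlsx", data_only=True)
main, extra = wb.worksheets[0], wb.worksheets[1]

# Group the additional-data rows by the id they share with the main sheet.
extra_by_id = defaultdict(list)
for row in extra.iter_rows(min_row=2, values_only=True):
    extra_by_id[row[0]].append(row[1:])

for row in main.iter_rows(min_row=2, values_only=True):
    record_id, name = row[0], row[1]
    # Apply validation / constraints here.
    if record_id is None or not name:
        print(f"skipping invalid row: {row}")
        continue
    payload = {"id": record_id, "name": name,
               "details": extra_by_id.get(record_id, [])}
    resp = requests.post("http://localhost:8080/api/records", json=payload)
    resp.raise_for_status()
```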
What strategy would you follow?
There are two aspects to this question.
Data migration
Your first step will be to "Define RDBMS schema", but how far are you going to go with it? Spreadsheets are notoriously un-normalized and so have lots of duplication. You say in your other question that "Data is loosely structured, and there are no explicit constraints." If you want to transform that into a rigorously defined schema (at least 3NF) then you are going to have to do some cleansing. SQL is the best tool for data manipulation.
I suggest you build two staging tables, one for each worksheet. Define the columns as loosely as possible (big strings basically) so that it is easy to load the spreadsheets' data. Once you have the data loaded into the staging tables you can run queries to assess the data quality:
how many duplicate primary keys?
how many different data formats?
what are the look-up codes?
do all the rows in the second worksheet have parent records in the first?
how consistent are code formats, data types, etc?
and so on.
These investigations will give you a good basis for writing the SQL with which you can populate your actual schema.
Or it might be that the data is so hopeless that you decide to stick with just the two tables. I think that is an unlikely outcome (most applications have some underlying structure, we just have to dig deep enough).
Data Loading
Your best bet is to export the spreadsheets to CSV format. Excel has a wizard to do this; use it (rather than doing Save As...). If the spreadsheets contain any free text at all, the chances are you will have sentences which contain commas, so make sure you choose a really safe separator, such as ^^~.
Most RDBMS tools have a facility to import data from CSV files. PostgreSQL and MySQL are the obvious options for an NGO (I presume cost is a consideration), but both SQL Server and Oracle come in free (if restricted) Express editions. SQL Server obviously has the best integration with Excel. Oracle has a nifty feature called external tables which allows you to define a table whose data is held in a CSV file, removing the need for staging tables.
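Putting the staging-table checks above and the CSV export together, here is a minimal sketch (pandas and SQLite used as stand-ins for whichever RDBMS you choose; the file names, the ^^~ separator and the record_id key column are assumptions):

```python
# Load both exported worksheets as loosely-typed (all-string) staging
# tables, then run a couple of the data-quality queries described above.
import sqlite3
import pandas as pd

con = sqlite3.connect("staging.db")

for csv_file, table in (("main_sheet.csv", "stg_main"),
                        ("detail_sheet.csv", "stg_detail")):
    df = pd.read_csv(csv_file, sep=r"\^\^~", engine="python", dtype=str)
    df.to_sql(table, con, if_exists="replace", index=False)

# How many duplicate primary keys?
print(pd.read_sql("""SELECT record_id, COUNT(*) AS n
                     FROM stg_main GROUP BY record_id
                     HAVING COUNT(*) > 1""", con))

# Do all rows in the second worksheet have a parent record in the first?
print(pd.read_sql("""SELECT d.* FROM stg_detail d
                     LEFT JOIN stg_main m USING (record_id)
                     WHERE m.record_id IS NULL""", con))
```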
One other thing to consider is Google App Engine. This uses Big Table rather than an RDBMS but that might be more suited to your loosely-structured data. I suggest it because you mentioned Google Docs as an alternative solution. GAE is an attractive option because it is free (more or less, they start charging if usage exceeds some very generous thresholds) and it would solve the app sharing issue with those other NGOs. Obviously your organisation may have some qualms about Google hosting their data. It depends on what field they are operating in, and the sensitivity of the information.
Obviously, you need to create a target DB and the necessary table structure.
I would skip the web services and write a Groovy script which reads the .xls (using the POI library), validates the data and saves it in the database.
In my view, anything more involved (web services, GUI...) is not justified: these kinds of tasks are very well suited for scripts because they're concise and extremely flexible while things like performance, code base scalability and such are less of an issue here. Once you have something that works, you will be able to adapt the script to any future document with different data anomalies you run into in a matter of minutes or a few hours.
This is all assuming your data isn't in perfect order and needs to be filtered and/or cleaned.
Alternatively, if the data and validation rules aren't too complex, you can probably get good results using a visual data transfer tool like Kettle: you just define the .xls as your source and the database table as your target, add some validation/filter rules if needed, and trigger the loading process. Quite painless.
If you'd rather use a tool than roll your own, check out SeekWell, which lets you write to your database from Google Sheets. Once you define your schema, select the tables into a Sheet, then edit or insert records and mark them for the appropriate action (e.g. update, insert). Set the schedule for the update and you're done. Read more about it here. Disclaimer: I'm a co-founder.
Hope that helps!
You might be doing more work than you need to. Excel spreadsheets can be saved as CSV or XML files, and many RDBMS clients support importing these files directly into tables.
This could allow you to skip writing web service wrappers and migration scripts. Your database constraints would still be properly enforced during any import. If your RDBMS data model or schema is very different from your Excel spreadsheets, however, then some translation would of course have to take place via scripts or XSLT.
Hopefully someone has been down this road before and can offer some sound advice about which direction I should take. I am currently involved in a project in which we will be using a custom database to store data extracted from Excel files based on pre-established templates (to maintain consistency). We currently have a process (written in C#/.NET 2008) that extracts the necessary data from the spreadsheets and imports it into our custom database.

What I am primarily interested in is the best method for integrating that process with our portal. I would like SharePoint to keep track of the metadata about the spreadsheet itself and let the custom database keep track of the data contained within the spreadsheet. So, one thing I need is a way to link spreadsheets in SharePoint to the custom database and vice versa. As these spreadsheets will be updated periodically, I need a tried and true way of ensuring that the data remains synchronized between SharePoint and the custom database. I am also interested in finding out how to use the data from the custom database to create reports within the SharePoint portal. Any and all information will be greatly appreciated.
I have actually written a similar system in SharePoint for a large financial institution as well.
The way we approached it was to have an event receiver on the document library. Whenever a file was uploaded or updated, the event receiver was triggered and we parsed through the data using Aspose.Cells.
The key to matching data in the excel sheet with the data in the database was a small header in a hidden sheet that contained information about the reporting period and data type. You could also use the SharePoint Item's unique ID as a key or the file's full path. It all depends a bit on how the system will be used and your exact requirements.
I think this might be awkward. The Business Data Catalog (BDC) functionality will enable you to tightly integrate with your database, but simultaneously trying to remain perpetually in sync with a separate spreadsheet might be tricky. I guess you could do it by catching the update events for the document library that handles the spreadsheets themselves and subsequently pushing the right info into your database. If you're going to do that, though, it's not clear to me why you can't choose just one or the other:
Spreadsheets in a document library, or
BDC integration with your database
If you go with #1, then you still have the ability to search within the documents themselves and updating them is painless. If you go with #2, you don't have to worry about sync'ing with an actual sheet after the initial load, and you could (for example) create forms as needed to allow people to modify the data.
Also, depending on your use case, you might benefit from the MOSS server-side Excel services. I think the "right" decision here might require more information about how you and your team expect to interact with these sheets and this data after it's initially uploaded into your SharePoint world.
So... I'm going to assume that you are leveraging Excel because it is an easy way to define, build, and test the math required. Your spreadsheet has a set of input data elements, a bunch of math, and then some output elements. Have you considered using Excel Services? In this scenario you would avoid running a batch process to generate your output elements. Instead, you can call Excel Services directly in SharePoint and run through your calculations. More information: available online.
You can also surface information in SharePoint directly from the spreadsheet. For example, if you have a graph in the spreadsheet, you can link to that graph and expose it. When the data changes, so does the graph.
There are also some High Performance Computing (HPC) Excel options coming out from Microsoft in the near future. If your spreadsheet is really, really big then the Excel Services route might not work. There is some information available online (search for HPC excel - I can't post the link).
Is it possible to use a spreadsheet as a database to store data? I don't want to use any external database; I want to use only Python, Google Apps and a spreadsheet.
For example: using Python and Google Apps I have developed a leave application form; on submission of that form I have to store the data in a spreadsheet instead of a database (MySQL or Oracle).
If it's possible, please give me some reference code.
Thanks in advance
Whilst storing data in a spreadsheet might seem like a good idea at first, as your application gets more complicated you may come to regret it. If your data structures become more complicated - especially if you need relationships between tables - you'll be much better off with a database.
My advice would be to use a database and then use the xlwt module to convert your data into spreadsheet format.
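A minimal sketch of that approach, assuming SQLite for storage and a simple leave-request table (the table layout and field names are placeholders for your leave-application data):

```python
# Store leave applications in a small local database first...
import sqlite3
import xlwt

con = sqlite3.connect("leave.db")
con.execute("""CREATE TABLE IF NOT EXISTS leave_request
               (employee TEXT, start_date TEXT, end_date TEXT, reason TEXT)""")
con.execute("INSERT INTO leave_request VALUES (?, ?, ?, ?)",
            ("A. Kumar", "2012-03-01", "2012-03-05", "Vacation"))
con.commit()

# ...then export the table to a spreadsheet with xlwt whenever you need it.
wb = xlwt.Workbook()
ws = wb.add_sheet("Leave requests")
for col, name in enumerate(("employee", "start_date", "end_date", "reason")):
    ws.write(0, col, name)
for row, record in enumerate(con.execute("SELECT * FROM leave_request"), start=1):
    for col, value in enumerate(record):
        ws.write(row, col, value)
wb.save("leave_requests.xls")
```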
I would recommend storing this data internally in a flat file or a database and transforming it into the spreadsheet when needed. Doing it this way will let you adapt as things change. If you use a Google Apps spreadsheet and the API changes even slightly, you could lose data. If you store it in an .xls spreadsheet, you might as well store it in a flat file and export the data, as that will be easier than reading from and writing to a spreadsheet.
Why do you want to do it this way?