I have been asked to pull in columns from a dbf file for use in a web app. I am using ASP.NET and C#. I was using a DataReader to populate the class variables. The problem is that the dbf file can change: sometimes columns are added or deleted, so my class would have to change every time the data source file changes in order to represent the columns. Is there a way around this?
Lots of ways of addressing this issue; your problem is handled by a whole class of solutions known as Object Relational Mapping (ORM). The absolute king of these in the Java and .NET world is NHibernate. This does mean a rebuild with each DB change, though. I use code generation to solve that problem: build the class and mapping files directly from the DB. Then you get into TDD and CI to make sure you haven't broken anything, and then...
However, if you want something quick and dirty, you could create a dictionary within your classes and store any extra columns in there. Completely flexible, but the extra columns aren't defined within the class itself.
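A minimal sketch of that idea (in Groovy for brevity; the C# version would use a Dictionary<string, object> the same way — the Record class and its fields are hypothetical):

    // Known columns live as real, typed fields; anything else lands in 'extras'.
    class Record {
        String name                       // a column we always expect
        Map<String, Object> extras = [:]  // catch-all for columns added later

        void put(String column, Object value) {
            if (column != 'extras' && hasProperty(column)) {
                setProperty(column, value)   // known column -> typed field
            } else {
                extras[column] = value       // unknown column -> dictionary
            }
        }
    }

    def r = new Record()
    r.put('name', 'Alice')       // stored in the typed field
    r.put('NEW_COLUMN', 42)      // column added later, kept in extras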
I just used a few try/catch blocks to solve this.
The firm I work at has a lot of data sources entering the firm's database through the Informatica ETL tool, stored in mapplets and other data models (sorry if I'm not using the exact terminology).
The problem is that all the business logic is stored in the 'graphical interface' and nowhere else: every time I want to see what goes into a target field, I have to trace the inputs back through the mapplet, and that takes a very long time.
The question is: is there a tool that can take all the relationships in an Informatica mapplet and export them to an Excel table (so I can see it all without tracing)? That way I could try to write proper documentation...
Thanks in advance.
It's possible to export mappings or whole workflows to XML. Next, you can use this tool - it will create tables with the source-to-target dependencies for every mapping.
Keep in mind it will only map inputs to outputs; it won't extract the full logic and transformations done along the way - that would have been too complex for a simple visualization.
Informatica supports exporting mapping information to Excel - just search the documentation for how to do it.
However, for anything other than the simplest of mappings, what ends up in Excel is not that easy to understand. If your Informatica installation supports it, then using the lineage capabilities is a much better bet.
So, I have a project where I get data from a dozen different sources; some are database objects, but most often the data is in different JSON or XML formats. I need to take this disparate data and pull it into one single, clean, managed object that we control.
I have seen dozens of different posts on various object-to-object mapping tools, Orika being one of them. But the problem is that Orika, like many of these, still needs solid classes defined to do the mapping. If there is a change to the mapping, then I have to change my class, re-commit it, and then build and deploy new code... and, as with any code change, it would also have to be tested. So maybe some of these tools aren't a great solution for me.
Then I was looking at doing some sort of database-driven mapping, where I have a source, a field, and then the new field or function I would like to map it to. With a database-driven tool, I could modify the fields in the database and everything would keep working as it should. I could always create a front-end to maintain those mappings.
So, with that... I am asking whether there is any database-driven tool where I can do field-to-field, or fields-to-functions, mapping. Drools was my first choice, but I don't know if it is my best choice - maybe it is overkill for my needs? I was looking for advice on what might be the best tool for this mapping.
Please let me know if you need any more information from me, and thanks for all the help!
Actually, Orika can handle dynamic data sources like that; there is even an example of how to convert from an XML Element (DOM API) or even a JsonObject.
You can use an XML parser to convert your data into Element objects, or Jackson to get a JsonObject.
Then define your class map between your "canonical" Java class and these dynamic "classes".
See "Customizing the PropertyResolverStrategy" at http://orika-mapper.github.io/orika-docs/advanced-mappings.html
Here is an example of Orika mapping a MongoDB DBObject to a Java bean:
https://gist.github.com/elaatifi/ade7321a1405c61ff8a9
However, converting JSON is more straightforward than XML (the semantics of attributes, children, and custom tags do not map cleanly onto JavaBeans).
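As a rough sketch of the general approach - this uses Orika's documented Map-backed mapping rather than a custom PropertyResolverStrategy, and the class and field names are hypothetical - once the JSON or XML has been parsed into a Map, the mapping to the canonical class can be declared by key:

    import ma.glasnost.orika.MapperFacade
    import ma.glasnost.orika.MapperFactory
    import ma.glasnost.orika.impl.DefaultMapperFactory
    import ma.glasnost.orika.metadata.Type
    import ma.glasnost.orika.metadata.TypeBuilder

    // The canonical class we control (hypothetical)
    class Person {
        String firstName
        String lastName
    }

    MapperFactory factory = new DefaultMapperFactory.Builder().build()

    // Describe both sides: a Map of parsed data and the canonical class
    Type<Map<String, Object>> mapType = new TypeBuilder<Map<String, Object>>() {}.build()
    Type<Person> personType = new TypeBuilder<Person>() {}.build()

    factory.classMap(mapType, personType)
           .field('first', 'firstName')   // source key -> canonical field
           .field('last', 'lastName')
           .register()

    MapperFacade mapper = factory.getMapperFacade()
    Person p = mapper.map([first: 'Ada', last: 'Lovelace'], mapType, personType)

The appeal here is that the field-to-field pairs are plain strings, so they could just as well be read from a database table, which is what the question is after.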
Some days I love my DBAs, and then there is today...
In a Grails app, we use the database-migration plugin (based on Liquibase) to handle migrations etc.
All works lovely.
I have been informed that there is a set of DB administrative metadata that we must support on every table. This information has zero use to the app.
Now, I can easily update my models to accommodate this, but that answer is ugly.
The problem is that at each migration, Liquibase (via the database-migration plugin) now complains about the schema and the model being out of sync.
Is there any way to tell Liquibase (or GORM) that columns x, y, and z are to be ignored?
What I am trying to avoid is changesets like this:
    changeSet(author: "cwright (generated)", id: "1333733941347-5") {
        dropColumn(columnName: "BUILD_MONTH", tableName: "ASSIGNMENT")
    }
This changeset tries to bring the schema back in line with the model. Being able to annotate those columns as not applying to the model would be a good thing.
Sadly, you're probably better off defining your own mapping block and taking control of the data mapper (which is essentially what Hibernate is) yourself at this point. If you need to control the way the database-migration plugin handles migrations, you might want to look at the source or raise an issue on its JIRA. Naively, mapping your columns explicitly in the domain model should allow you to bypass the unnecessary columns from the DB.
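For illustration, an explicit GORM mapping block along those lines (the domain class and column names are hypothetical, and whether the migration diff still flags the DBA-owned columns depends on the plugin version):

    class Assignment {
        String title

        static mapping = {
            table 'ASSIGNMENT'
            title column: 'TITLE'   // only the columns the app actually owns
            version false           // skip GORM's version column if the DBAs manage it
        }
    }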
So, I don't know if the question is explicit enough, but here's my problem:
I am writing a small application in VB.NET that retrieves information from a website and presents it to the user. Basically, I have written a class with a Get(URL) method that retrieves the web page, reads it, and populates the object's various read-only properties.
This class works OK.
Now I would like to store that information in a database (I'm using Access for now), so that I can read the data from the DB whenever the class gets called for a known URL. As I'm fairly new to OOP and completely new to DB usage in desktop applications (no problem designing the DB, though), I am not sure how to proceed:
Should I put the database code in my existing class?
Should I create an extended class based on the existing one, adding the DB code?
Should I create a completely different class for the DB data and put the switch logic (read from DB or from web) in my application?
...
I realize that my question may sound silly to the most experienced of you, but I'm new to this and I would really like to learn how to do things the right way the first time!!!
Thanks!
This is what I would do:
Create a new class for the database code, and create an interface for it that it implements.
Then create another class that has the code to fetch the web data. Make it implement the same interface.
Now you can substitute either class to do your data access from your controller class.
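The shape of that, sketched in Groovy since the idea is language-agnostic (in VB.NET you would use Interface/Implements; all names here are hypothetical):

    // The data the app cares about (hypothetical)
    class PageInfo {
        String title
    }

    // One contract, two sources of the same data
    interface PageInfoSource {
        PageInfo fetch(String url)
    }

    class WebPageInfoSource implements PageInfoSource {
        PageInfo fetch(String url) {
            // download and parse the page here
        }
    }

    class DbPageInfoSource implements PageInfoSource {
        PageInfo fetch(String url) {
            // look the URL up in the database here
        }
    }

    // The controller holds the switch logic: try the DB first, fall back to the web
    PageInfo load(String url, PageInfoSource db, PageInfoSource web) {
        db.fetch(url) ?: web.fetch(url)
    }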
Also, I usually put the database and data access code in separate projects from my service and UI classes, which are in projects of their own, but that might be overkill for your situation.
If you'd like to read more on the subject, look up n-tier application design. The tier you're talking about here is data access.
http://en.wikipedia.org/wiki/Data_access_layer
I'd like to know your approach/experiences when it's time to initially populate the Grails DB that will hold your app data. Assuming you have CSVs with the data, is it "safer" to create a script (with whatever tool fits you) that:
1. Generates the BootStrap commands with the domain classes, runs them in a test or dev environment, and then uses the native DB commands to export the data to prod?
2. Creates the DB insert script directly, assuming GORM's version = 0 and manually incrementing the soon-to-be auto-generated IDs?
My fear is that the second approach may lead to inconsistencies, since Hibernate is responsible for ID generation, and there may be something else I'm missing.
Thanks in advance.
Take a look at this link. It allows you to run Groovy scripts in the normal Grails context, giving you access to all Grails features, including GORM. I'm currently importing data from a legacy database, and I've found that writing a Groovy script that uses the Groovy SQL interface to pull out the data, then puts that data into domain objects, is the easiest approach. Once you have the data imported, you just use the commands specific to your database system to move it to the production database.
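A minimal sketch of that kind of import script (the connection details, legacy table, and Person domain class are all hypothetical):

    import groovy.sql.Sql

    // Connect to the legacy database (placeholder credentials)
    def sql = Sql.newInstance('jdbc:postgresql://legacy-host/olddb', 'user', 'pass',
                              'org.postgresql.Driver')

    // Pull each legacy row and save it through GORM, so Grails owns the new IDs
    sql.eachRow('SELECT first_name, last_name FROM legacy_people') { row ->
        new Person(firstName: row.first_name, lastName: row.last_name)
                .save(failOnError: true)
    }
    sql.close()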
Update:
Apparently the updated entry referenced from the blog post I link to no longer exists. I was able to get this working using the code at the following link, which is also referenced in the comments.
http://pastie.org/180868
Finally, it seems that the simplest solution is to take into account that GORM, as of the current release (1.2), uses a single sequence for all auto-generated IDs. Keeping this in mind when creating whatever scripts you need (in the language of your preference) should suffice. I understand it's planned for the 1.3 release that every table will get its own sequence.
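For illustration, an insert script in that spirit (the table, columns, and CSV layout are hypothetical, and it assumes Hibernate's default shared hibernate_sequence on PostgreSQL, so check the actual sequence name in your schema):

    import groovy.sql.Sql

    def sql = Sql.newInstance('jdbc:postgresql://localhost/appdb', 'user', 'pass',
                              'org.postgresql.Driver')

    // All tables draw IDs from the same sequence, so ask it for the next value
    // before every insert, whatever the target table is.
    new File('people.csv').splitEachLine(',') { fields ->
        def id = sql.firstRow("SELECT nextval('hibernate_sequence') AS id").id
        sql.executeInsert('INSERT INTO person (id, version, first_name) VALUES (?, 0, ?)',
                          [id, fields[0]])
    }
    sql.close()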