We are currently working in the Laboratory domain.
The Laboratory domain embraces many profiles, and each profile consists of many actors.
The LAB TF mentions LAB-n several times.
For example:
LAB-1~5 (5)
LAB-21~23 (3)
LAB-26~31 (6)
LAB-51 (1)
LAB-61~62 (2)
What are they, actually?
Machines, devices, actors, or something else?
What are they used for?
I don't know where the correct dictionary-style definition lives, and I failed to find one on the www.ihe.net website, but my interpretation is:
they are transaction identifiers.
Short sequences of letters and digits uniquely identifying the "process/scenario/sequence of events/unit of change/use case/.." you are talking about.
A patient comes to the doctor, a nurse takes a blood sample and sends it to the laboratory in order to diagnose what kind of pathogens live in it. After some time spent on analysis and cultivation the results are known; they travel back to the doctor, who then picks a diagnosis and proposes a treatment.
This flow of events can be described as a sequence of transactions. The details may vary, but the concepts are approximately the same regardless of country, town, or hospital: uniquely identified "steps" whose actors are typically mapped to (or supported by) communicating software components. The more compatible these steps or transactions are, the easier it is to integrate equipment, people, and software coming from different cultures or vendors. IHE attempts to identify those patterns and give them names: transaction identifiers.
For instance, LAB-26 describes (in my interpretation) what happens when the analyzer device (called the "(LD)Pre/PostAnalyzer") detects a specimen and needs to send a notification to the automatic test scheduler (a software component called the "Automation Manager") saying that the result is known and can be further processed, e.g. that a laboratory worker can take the sample tubes out of the machine and insert another set, and the laboratory doctor can schedule another (refining) set of tests for this sample.
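To make that idea concrete, here is a tiny Python sketch of one actor firing a LAB-26-style notification at another. The class names, fields, and payload are my own invention for illustration only; real IHE transactions carry HL7 messages between systems, not Python objects.

```python
from dataclasses import dataclass

@dataclass
class Transaction:
    """An IHE-style transaction: a named, uniquely identified exchange between two actors."""
    identifier: str      # e.g. "LAB-26" (hypothetical use here)
    sender: str          # actor initiating the exchange
    receiver: str        # actor consuming the message
    payload: dict        # message content (in real profiles this is an HL7 message)

class AutomationManager:
    """Toy stand-in for the Automation Manager actor."""
    def receive(self, txn: Transaction) -> None:
        if txn.identifier == "LAB-26":
            print(f"{txn.receiver}: specimen {txn.payload['specimen_id']} processed, "
                  f"ready for the next scheduling step")

# The analyzer actor fires the transaction when it finishes with a specimen.
analyzer_notification = Transaction(
    identifier="LAB-26",
    sender="(LD)Pre/PostAnalyzer",
    receiver="Automation Manager",
    payload={"specimen_id": "S-0042", "status": "processed"},
)
AutomationManager().receive(analyzer_notification)
```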
See also:
chapter "2.2 The generic IHE Transaction Model" in your document
I've been asked to design a relational database to keep data to answer clinic operation queries such as:
● List the patient appointments for each doctor for a given date.
● When a patient rings to make an appointment, give the available time slots for a given date.
● Retrieve the address of patients to send notices via mail services.
I have a database schema of one relation, as shown below, but I was wondering whether there are any mistakes I've made.
ABC(doc-name, doc-gender, registration_num, qualification, pat-name, pat-gender, DOB, address, phone-num, appoint-date, appoint-time, type)
Is the use of words such as date and the use of hyphens generally discouraged? Are there any other weaknesses in my design?
Thank you
So, that's not a schema or a design. Not for a relational database, which, based on the tags for the question, is what you're looking for. That's the storage definition for an ID/Value style of database. If you're looking for actual relational storage, you should be building out those relationships through the process of normalization.
For example, let's start at the beginning with doc-name (I am personally not crazy about using hyphens, but it's not a showstopper; just be sure whichever RDBMS you're working with supports them in names and then you're good to go). If we think about this just from a data entry standpoint, we don't want to have to type in the name of the doctor every time we use that doctor. Instead, we'd want to pull that from a list. So, clearly, we can break that apart from the rest of the information. There is the beginning of our normalization process. We can also easily note the fact that a patient is likely to have more than one appointment. Under the current structure, we'd have to re-enter every bit of patient information prior to the appointment. There's another place where we'd break this apart.
There is tons more to this simple example that could be split out and normalized.
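As a rough illustration of where that normalization process might lead, here is a sketch of a split-out schema. The table and column names are my own guesses, not a definitive design, and it's expressed as DDL run through Python's built-in sqlite3 module just to keep it self-contained.

```python
import sqlite3

# In-memory database just to show the shape of a normalized schema.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE doctor (
    doctor_id        INTEGER PRIMARY KEY,
    name             TEXT NOT NULL,
    gender           TEXT,
    registration_num TEXT UNIQUE,
    qualification    TEXT
);

CREATE TABLE patient (
    patient_id INTEGER PRIMARY KEY,
    name       TEXT NOT NULL,
    gender     TEXT,
    dob        TEXT,          -- stored as ISO-8601 text in SQLite
    address    TEXT,
    phone_num  TEXT
);

-- Each appointment references exactly one doctor and one patient,
-- so doctor and patient details are entered only once.
CREATE TABLE appointment (
    appointment_id INTEGER PRIMARY KEY,
    doctor_id      INTEGER NOT NULL REFERENCES doctor(doctor_id),
    patient_id     INTEGER NOT NULL REFERENCES patient(patient_id),
    appt_date      TEXT NOT NULL,
    appt_time      TEXT NOT NULL,
    appt_type      TEXT,
    UNIQUE (doctor_id, appt_date, appt_time)   -- one booking per slot per doctor
);
""")

# Example query: one doctor's appointments for a given date.
rows = conn.execute("""
    SELECT p.name, a.appt_time, a.appt_type
    FROM appointment a
    JOIN patient p ON p.patient_id = a.patient_id
    WHERE a.doctor_id = ? AND a.appt_date = ?
    ORDER BY a.appt_time
""", (1, "2024-03-01")).fetchall()
```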
I'd suggest you read up on data normalization. My favorite teacher on the subject is Louis Davidson. Here's his book on the topic. Read that and then try to readdress the situation you're facing.
I'm assuming this isn't just homework. If it is, currently, I'd give you an "F". If it isn't, you should track down someone to give you hand with this database design. You won't be able to quickly read Louis' book on the topic and turn around even a rough working design in any reasonable period of time.
I have to second what Grant said, this is not a relational design at all.
Stop and ask yourself for example what happens if Steven Arrow has to take an afternoon off and update his schedule. You need to be very careful updating the database lest you reassign all his patients.
Spending a total of 5 minutes on this, I see at the very least:
A Doctors table, a Patients table, and probably a table of open appointment times (which, by the way, is a bit harder than you'd think, so you'll have to give some thought to how to handle it and do some reading up on tables for scheduling).
That's for starters. I might break out patients' phone numbers into their own table. Why? Well, how many columns do you want to have for phone numbers? One? What if they have a work AND a home number? Or a work and a cell and a home? And so on.
The concept you're looking for is normal forms. You don't need to go overboard, but generally 3NF is about right.
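To show what breaking phone numbers out into their own table might look like, here is a small sketch in the same sqlite3 style; the table and column names are illustrative guesses, not a definitive design.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE patient (
    patient_id INTEGER PRIMARY KEY,
    name       TEXT NOT NULL
);

-- One row per number lets a patient have home, work, and cell numbers
-- without adding a new column every time.
CREATE TABLE patient_phone (
    patient_id INTEGER NOT NULL REFERENCES patient(patient_id),
    phone_type TEXT NOT NULL,          -- 'home', 'work', 'cell', ...
    phone_num  TEXT NOT NULL,
    PRIMARY KEY (patient_id, phone_type)
);
""")

conn.execute("INSERT INTO patient VALUES (1, 'Jane Doe')")
conn.executemany(
    "INSERT INTO patient_phone VALUES (?, ?, ?)",
    [(1, "home", "555-0100"), (1, "cell", "555-0101")],
)
```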
I was wondering if you could look at this database design and tell me whether it is normalized as well as it can be, and likewise point out any problems or improvements you see.
The database essentially consists of one-to-many relationships and is made for households entering an assistive living program. Each year we have them complete the same form to see whether any changes have occurred, and we analyze the data.
Head of household: information that I do not believe will be changing. Unique to the Social Security number provided.
Contact & Address: I separated address from contact because people who enter this program are required to be living in selected homes. Likewise, people do not stay in this program forever, so it isn't uncommon for us to see different households living at the same address over a period of time.
Income: basically analyzing what they had at the beginning and what they developed over time. So unearned_entry and earned_entry would stay consistent, but earned and unearned will fluctuate from form to form.
Household_information: veteran status and disability status can, although it is unlikely, still change. For these two attributes it's either yes or no (either you have someone disabled in your household or you don't). Household size and minors could potentially differ, since relatives may join the household, children become adults, etc.
Program Information: attributes about each form, basically. Intake date will stay consistent (when they entered the program). Transaction type will be either entry, update, or close. Exit date will be null until the transaction type is close.
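A rough sketch of the structure described above, with table and column names paraphrased from the descriptions rather than taken from the actual design, looks something like this:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE head_of_household (
    household_id INTEGER PRIMARY KEY,
    ssn          TEXT UNIQUE NOT NULL,   -- unique to the Social Security number provided
    first_name   TEXT,
    last_name    TEXT
);

CREATE TABLE address (
    address_id INTEGER PRIMARY KEY,
    street     TEXT,
    city       TEXT,
    state      TEXT,
    zip        TEXT
);

-- Contact is kept separate from address because different households
-- can occupy the same selected home over time.
CREATE TABLE contact (
    contact_id   INTEGER PRIMARY KEY,
    household_id INTEGER NOT NULL REFERENCES head_of_household(household_id),
    address_id   INTEGER NOT NULL REFERENCES address(address_id),
    phone        TEXT
);

-- One row per yearly form; intake_date stays constant, exit_date is
-- NULL until transaction_type becomes 'close'.
CREATE TABLE program_information (
    form_id          INTEGER PRIMARY KEY,
    household_id     INTEGER NOT NULL REFERENCES head_of_household(household_id),
    intake_date      TEXT NOT NULL,
    transaction_type TEXT NOT NULL CHECK (transaction_type IN ('entry','update','close')),
    exit_date        TEXT
);

CREATE TABLE income (
    form_id        INTEGER NOT NULL REFERENCES program_information(form_id),
    earned_entry   REAL,
    unearned_entry REAL,
    earned         REAL,
    unearned       REAL
);

CREATE TABLE household_information (
    form_id        INTEGER NOT NULL REFERENCES program_information(form_id),
    veteran        INTEGER,   -- yes/no
    disabled       INTEGER,   -- yes/no
    household_size INTEGER,
    minors         INTEGER
);
""")
```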
I am designing a database for a credit bureau and am seeking some guidance.
The data they receive from banks, MFIs, SACCOs, utility companies, etc. comes with various types of IDs. For example, it is perfectly legal to open a bank account with a national ID and also with a passport. The scenario that has my head banging is that Customer1 will take a credit facility (call it a loan for now) at Bank1 with their passport, then go to Bank2 and take another loan with their national ID, and to Bank3 with their military ID. Eventually, when this data comes from the banks to the bureau, it would be seen as three different people, while we know that it is actually one person. At this point, there is nothing we can do as a bureau.
However, one way out (for now) is using the government registry, which provides a repository holding both passports and national IDs. So once we query for this information and get a response, how do I show in the DB that Passport_X is related to NationalID_Y and MilitaryNumber_Z?
Again, a person's name could be captured in various orders. Bank1 could send FName, LName, OName while Bank3 sends LName, FName only. How do I store these names?
Even for a single ID type, e.g. NationalID, you will often find misspelled or missing names. So one NationalID in our database could end up with about six different names, because the person's name was captured differently by the various banks where he has transacted.
And that is just the tip of the iceberg. We have issues with addresses, telephone numbers, and so on.
Do you have any insight as to how I'd structure my database to ensure we capture all data from all banks and provide the most accurate information possible regarding an individual? Better yet, do you have experience with this type of setup?
Thanks.
how do I show in the DB that Passport_X is related to NationalID_Y and MilitaryNumber_Z?
Trivial.
You have an identity table that has an AlternateId field if the identity is linked to another one. Use the first identity you created as the master. Any alternative will have AlternateId pointing to it.
You need to separate the identity from the data in it, so you can have alternate versions of it, possibly with an origin and a timestamp. You will likely need to fully support versioning and tying different identities to each other as alternatives, including generating a "master identity", possibly by algorithm, carrying the "official" (i.e. consolidated) version of your data.
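A minimal sketch of that shape (table and column names are illustrative only, not a recommendation), again expressed through Python's sqlite3:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
-- One row per identity document seen by the bureau.
CREATE TABLE identity (
    identity_id  INTEGER PRIMARY KEY,
    id_type      TEXT NOT NULL,     -- 'passport', 'national_id', 'military_id'
    id_number    TEXT NOT NULL,
    alternate_id INTEGER REFERENCES identity(identity_id),  -- NULL on the master row
    UNIQUE (id_type, id_number)
);

-- Versioned person data, kept separate from the identity itself,
-- with the origin (which bank sent it) and a timestamp.
CREATE TABLE identity_data (
    identity_id INTEGER NOT NULL REFERENCES identity(identity_id),
    source      TEXT NOT NULL,      -- e.g. 'Bank1'
    captured_at TEXT NOT NULL,      -- ISO-8601 timestamp
    full_name   TEXT,
    address     TEXT,
    phone       TEXT
);
""")

# Passport_X is the master; the national ID and military ID point at it.
conn.execute("INSERT INTO identity VALUES (1, 'passport', 'X', NULL)")
conn.execute("INSERT INTO identity VALUES (2, 'national_id', 'Y', 1)")
conn.execute("INSERT INTO identity VALUES (3, 'military_id', 'Z', 1)")
```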
The details are complex. Mostly, you have to make a LOT of compromises without killing performance, so in the end, HIRE A SPECIALIST. There is a reason there are people out there as senior database designers or architects with 20+ years of experience finding the optimal solution given constraints you may not even be aware of (application-wise).
Better yet, do you have experience with this type of setup?
Yes. Try financial information. Stock symbols / feeds / definitions are not necessarily compatible and vary by whom you get them from. Any non-trivial setup has different data feeds that may show the same item slightly differently, sometimes in error. Different names, sometimes different prices (example: ES, from CME Group, is 50 USD per point, but on TT FIX it is 5; to make up for it, the price is multiplied by 10, so instead of 1000.25 you get 10002.5). This is the same kind of consolidation work, and it STINKS.
Tons of code, tons of proper database design, redoing it half a dozen times to get the proper performance. This is tricky, sadly.
A webapp called StatSheet got funded today (August 4th, 2010)
http://techcrunch.com/2010/08/04/former-crunchies-finalist-statsheet-recieves-1-3-million-in-series-a/
They are doing 'automated journalism': using computers to generate human-looking reports of sports games from the statistics.
http://www.guardian.co.uk/media/pda/2010/mar/30/digital-media-algorithms-reporting-journalism
Does anyone have any insight into what approaches/algorithms are being used to do this, or how it might be replicated?
The details for projects like this are a little sparse, but it looks like the baseball summarizer Stats Monkey consists of:
Statistical model: They build a model of how baseball games typically unfold, most likely by looking at how certain variables (e.g. runs, at bats, etc.) change during the course of a game or differ from what you'd expect to see going into the game (e.g. a no-name team scores more runs than a highly-favored team). How well a given game fits (or doesn't fit) this model gives them an idea of what might be interesting about that game (e.g. key plays or players).
Text generation: Given a library of pre-written narrative arcs (e.g. back-and-forth game, come-from-behind victory, etc.) they use the "interesting information" from the model of the game to construct a summary of the game. I'm not sure, but it looks like they use a decision tree -- conditioned on the information from the model -- to select one of these arcs.
Miscellaneous glue: This isn't mentioned in their write-up, but I'd imagine there are a fair number of hard-coded rules that "glue" the main narrative arcs into a single, cohesive story.
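As a toy illustration of the text-generation step only (this is my own sketch, not Stats Monkey's actual code; the arc selection logic and templates are made up), a decision-tree-style choice over canned narrative arcs might look like:

```python
def pick_arc(game):
    """Crude decision tree: choose a narrative arc from simple game features."""
    if game["lead_changes"] >= 4:
        return "back-and-forth game"
    if game["winner_max_deficit"] >= 5:
        return "come-from-behind victory"
    return "comfortable win"

TEMPLATES = {
    "back-and-forth game":
        "{winner} outlasted {loser} {ws}-{ls} in a seesaw battle.",
    "come-from-behind victory":
        "{winner} erased a {deficit}-run deficit to beat {loser} {ws}-{ls}.",
    "comfortable win":
        "{winner} cruised past {loser} {ws}-{ls}.",
}

def summarize(game):
    """Fill the chosen arc's template with the 'interesting' facts from the game model."""
    arc = pick_arc(game)
    return TEMPLATES[arc].format(
        winner=game["winner"], loser=game["loser"],
        ws=game["winner_score"], ls=game["loser_score"],
        deficit=game["winner_max_deficit"],
    )

game = {"winner": "Cubs", "loser": "Mets", "winner_score": 6, "loser_score": 5,
        "lead_changes": 2, "winner_max_deficit": 5}
print(summarize(game))   # -> "Cubs erased a 5-run deficit to beat Mets 6-5."
```

The real system presumably conditions on a much richer statistical model, but the structure, with features in, arc selected, template filled, is roughly the same.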
The authors of Stats Monkey have done a fair amount of research in related areas, like website summarization and automatic content aggregation and generation. Here are a few papers that might be interesting:
Nathan Nichols and Kristian Hammond. “Machine-Generated Multimedia Content.” Proceedings of the Second International Conference on Advances in Computer-Human Interactions, 2009.
Nathan Nichols, Lisa Gandy, and Kristian Hammond. "From Generating to Mining: Automatically Scripting Conversation Using Existing Online Sources." The Proceedings of the Third International Conference on Weblogs and Social Media, 2009.
J. Liu and L. Birnbaum. 2008. "LocalSavvy: Aggregating Local Points of View about News Issues". WWW 2008 Workshop on Location on the Web.
A few associates and I are starting an EMR (Electronic Medical Records) project. I have heard talk in the past, and more so lately, about a standard record format to facilitate the transferring of records, when appropriate (HIPAA), from one facility to another. Has anyone seen any information on this?
You can look to HL7 for interoperability between systems (http://www.hl7.org/). Patient demographic information and textual notes can be passed. I've been out of the EMR space too long to know if any standards groups have done anything interesting of late. A standard format that maintains semantic meaning is a really, really difficult problem. See SnoMed (http://www.nlm.nih.gov/research/umls/Snomed/snomed_main.html) for one long-running ontology effort -- barely the start of a rich interchange format.
A word of warning from someone who spent several years with an upstart EMR vendor...This is a very hard business to be in. Sales cycles for large health systems literally can take years, and the amount of hand-holding required for smaller medical practices can quickly erode margins. Integration with existing practice management systems is non-standard, even if those vendors claim otherwise. More and more issues abound. I'm not sure that it's a wise space for an unfunded start-up to enter.
I think it's an error to consider HL7 to be a standard in the sense you seem to mean. It is heavily customized and can be quite different from one customer to the next. It's one of those standards with too much flexibility.
I recommend you read the standard (which should take you a while), then try to find a community of developers working with the standard. Ask them for horror stories, and be prepared for what you'll hear.
A month late, but...
The standard to shoot for is definitely HL7. It is used in many fields, so it is highly customizable, but there is a well-defined standard for healthcare. Each message type (ACK, DSR, MCF), segment (PID, PV1, OBR, MSH, etc.), sequence, and event type (A08, A12, A36) has a specific meaning regardless of your system of choice.
We haven't had a problem interfacing MiSYS, Statlan, Oacis, Epic, MUSE, GE Centricity/Lastword and others sending DICOM, ADT, PACS information between the systems we have in use. Most of these systems will be set up with an interface engine to tweak messages where needed, so adding a way to filter HL7 messages as they come through to your system, and as they go out to the downstreams, would be a must.
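For a feel of what those messages look like on the wire, here is a tiny sketch that splits an HL7 v2 message into segments and fields. The sample message is hand-written for illustration and is not a complete or validated ADT message; real systems would use an interface engine or a proper HL7 library rather than string splitting.

```python
# A minimal, hand-written HL7 v2 example (illustrative only).
raw = (
    "MSH|^~\\&|SENDAPP|SENDFAC|RECVAPP|RECVFAC|202001011230||ADT^A08|MSG00001|P|2.3\r"
    "PID|1||123456^^^HOSP||DOE^JOHN||19700101|M\r"
    "PV1|1|O"
)

def parse_hl7(message: str) -> dict:
    """Split an HL7 v2 message into {segment_name: [list of fields per occurrence]}."""
    segments = {}
    for seg in message.split("\r"):
        fields = seg.split("|")
        # Assumes the default separators (| and ^); real parsers read them from MSH-1/MSH-2.
        segments.setdefault(fields[0], []).append(fields[1:])
    return segments

msg = parse_hl7(raw)
patient_name = msg["PID"][0][4]   # PID-5 (patient name): DOE^JOHN
event = msg["MSH"][0][7]          # MSH-9 (message type / trigger event): ADT^A08
print(patient_name, event)
```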
Even if there were a new "presidential standard" for interoperability, and I would hazard a guess that it would be HL7 anyway, I would build the system with HL7 messaging, as this is currently the industry-accepted standard.
While solving interoperability, you shouldn't care only about the interchange format; the local storage formats should also be standardized, to simplify the transformation to the interchange format and vice versa.
openEHR is a great format for storage; it is more expressive than HL7 v2, v3, and CDA, so it can be transformed easily into any of those. The specs are open and available here: http://openehr.org/programs/specification/releases/1.0.2
For the interchange format, any of HL7 v2, v3 and CDA are good. Also consider CCR and CCD.
http://www.aafp.org/practice-management/health-it/astm.html
If you want to go outside HL7 thinking and are looking for a comprehensive EMR or EHR with a specified record format, rather than a record-extract message interchange format, then have a look at openEHR, http://www.openehr.org/. The ISO 13606 extract standard is (almost) a subset of openEHR. You will also find open-source reference libraries and openEHR implementations of varying maturity available in Java, .NET, Ruby, Python, Groovy, etc.
Some organisations are also producing HL7 artifacts like CDA as output from openEHR based EHR/EMR systems.
Have a look at the Continuity of Care Record; IIRC, that's what Google Health uses for input. It's not an HL7-family standard (there is a competing HL7-family standard, but I don't recall what it's called offhand).
There likely will not be a standard medical record format until the government dictates the format of one and requires its use by force of law.
That almost assuredly will not happen without socialized national health care. So, in reality, there is zero chance.
That's a correct answer, but I think something should be added about the meaningful use of EMRs. Officials Announce 'Meaningful Use,' EHR Certification Criteria:
Last week, CMS released proposed regulations defining the “meaningful use” of electronic health records, Reuters reports (Wutkowski/Heavey, Reuters, 12/31/09).
In addition, the Office of the National Coordinator for Health IT released an interim final rule describing the required certification standards for EHR technology (Simmons, HealthLeaders Media, 12/31/09).
Under the 2009 federal economic stimulus package, health care providers who demonstrate meaningful use of certified EHRs will qualify for incentive payments through Medicaid and Medicare.
Officials will offer a 60-day public comment period after both regulations are published in the Federal Register on Jan. 13. The interim final rule on EHR certification is scheduled to take effect 30 days after publication (Goedert, Health Data Management, 12/30/09). http://www.myemrstimulus.com/
This is a very hard problem because data collection starts with an MD and the only coding they know (ICD and CPT) is all about billing, not anything likely to be of use between providers (esp. in a form where the MD can be held legally liable). And they hate even that much paperwork.
Add to that the fact that HIPAA dictates that the patient, not the provider, owns the data. Not that they could understand it or do anything useful with it if they had it.
Good luck. Whatever happens will result from coercion by the government and will be a long, long time coming, IMHO.
Interestingly, the one source of solid medical information turns out to be the VA (because they don't have the adversarial issues of payment and legal liability). Go figure. That might be a good place to start for a standard with some existing data and some momentum, though. Here's another question with some info.