How to select many table diagram symbols from search results? - PowerDesigner

After running a fairly sophisticated search for tables with several criteria (part of the name, a specific stereotype, etc.), I get a result list of many tables. It's not clear how to select all of them at once on the diagram, for example to group them, move them to a separate area, or apply the same formatting (such as a background color) to all of them.
The only thing we seem to be able to do with the results is "Find in diagram" for a single selected table. Is there some way to work around this limitation and "select in diagram" all (or several multi-selected) tables from the search result list?
PS: using PowerDesigner 16.5 SP3 in Physical Data Model mode.
PPS: the current workaround is to do the search in VBScript code and also manipulate the symbol formatting from VBScript. It's rather inconvenient to write code for simple GUI manipulations that could be done manually on selected objects. I'm hoping for a better workaround...

Related

Best way to condense several columns of the same data?

The question title doesn't do the problem justice, so let me try to explain. I'm reorganizing a database which has (amongst other things) 20 fields with tools to be used for a job. Not every job needs 20 tools; some require 1, 2, or 3.
I was thinking I could grab the info from the 20 existing fields and move it into a table called tbl_Tools with a single tools field and a key. Then, in the main table, remove the 20 fields and add a single field referencing the key of tbl_Tools, condensing the info from the 20 fields into a single string where each tool is separated by a comma (this only has to be done because the location of each tool is important to save). The figure below gives a basic explanation.
Is this optimal, or is there a better way of doing this that I'm not seeing? I'd love to hear your feedback.
Thanks in advance, Rafael.
(Also not sure which tags to use for this)
The textbook answer to your question would be to use two tables storing the separate entities (TBL_INFO and TBL_TOOLS) and a third table storing the connections between them using two foreign keys (TBL_INFO_TOOLS_REL), just like this.
This way you minimize the number of empty columns and also keep the database clean, without columns that store multiple keys glued together with separators. This allows much simpler management of the tools required for certain jobs.
What you probably want is a many-to-many relationship (this consists of 3 tables; see the image below).
A comma-separated solution would work if your application handles the logic of splitting and joining tool IDs, but you lose the ability to query for tool-specific data, e.g. "which tool was used the most" or "who used tool_3".
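A minimal sketch of the three-table design in SQLite (table names follow the answer above; the column names and sample data are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Two entity tables plus a junction table holding the foreign-key pairs.
cur.executescript("""
CREATE TABLE tbl_info  (info_id INTEGER PRIMARY KEY, job_name TEXT);
CREATE TABLE tbl_tools (tool_id INTEGER PRIMARY KEY, tool_name TEXT);
CREATE TABLE tbl_info_tools_rel (
    info_id INTEGER REFERENCES tbl_info(info_id),
    tool_id INTEGER REFERENCES tbl_tools(tool_id),
    PRIMARY KEY (info_id, tool_id)
);
""")

cur.execute("INSERT INTO tbl_info VALUES (1, 'Job A'), (2, 'Job B')")
cur.executemany("INSERT INTO tbl_tools VALUES (?, ?)",
                [(1, 'hammer'), (2, 'drill'), (3, 'saw')])
cur.executemany("INSERT INTO tbl_info_tools_rel VALUES (?, ?)",
                [(1, 1), (1, 2), (2, 2), (2, 3)])

# "Which jobs use the drill?" becomes a plain join -- a query that is
# awkward to express against a comma-separated column.
rows = cur.execute("""
    SELECT i.job_name
    FROM tbl_info i
    JOIN tbl_info_tools_rel r ON r.info_id = i.info_id
    JOIN tbl_tools t          ON t.tool_id = r.tool_id
    WHERE t.tool_name = 'drill'
    ORDER BY i.info_id
""").fetchall()
```

The junction table is what makes both directions of the question cheap: tools per job and jobs per tool are each a single join.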

How to store static data that will be localized?

I am developing a health care system, and when a doctor starts to type a diagnosis, instead of typing it out, I want him to be able to select from a list displayed for him.
The list contains diseases or symptoms that will then be inserted into a diagnosis table in the database.
I did this for two reasons:
I want all doctors to use the same list of symptoms when writing their diagnoses, so the data can be worked with later, instead of each one typing it his own way.
The data will be localized and translated into different languages when displayed in different regions.
I am facing a problem here: should I put all of this in a lookup table in the database or in a config file? There are about 3000 rows in 7 languages (each language will have its own column), and I may add or remove data at any time.
I would put them in a database. I find it easier to maintain, and faster to query than a config file.
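As a sketch of the lookup-table route: instead of one column per language, a common layout stores one row per (term, language) pair, so adding an eighth language is an INSERT rather than a schema change. Table and column names here are illustrative, not from the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# One row per (term, language) instead of one column per language.
cur.executescript("""
CREATE TABLE symptom (symptom_id INTEGER PRIMARY KEY, code TEXT UNIQUE);
CREATE TABLE symptom_label (
    symptom_id INTEGER REFERENCES symptom(symptom_id),
    lang       TEXT,
    label      TEXT,
    PRIMARY KEY (symptom_id, lang)
);
""")

cur.execute("INSERT INTO symptom VALUES (1, 'HEADACHE')")
cur.executemany("INSERT INTO symptom_label VALUES (?, ?, ?)",
                [(1, 'en', 'Headache'), (1, 'fr', 'Mal de tête')])

# Fetch the label in the user's language.
label = cur.execute(
    "SELECT label FROM symptom_label WHERE symptom_id = 1 AND lang = 'fr'"
).fetchone()[0]
```

The 7-column layout from the question works too; the row-per-language shape just keeps "add a language" and "add a term" both as data changes.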

Solr: facets merge based on mapper

I was wondering if there's a way in Solr 4 or higher to merge facets together based on a mapper. I have a lot of products in my database with terms (in the "color" taxonomy, for example) which actually mean the same thing. For example, the color red is described as:
red
bordeaux
light red
dark red
I would like to merge such terms, because I don't want to bother the user with hundreds of choices when I can reduce that number to dozens. I'm trying to figure out the best way to do this: create a separate table in my database (used to map the term IDs together), or use functionality in Solr to do this at index time. I read something about Pivot Facets, but I guess those only apply if there is already a hierarchy? These different terms for red are just flat (not grouped together yet). Any advice?
EDIT:
I guess I found a solution: http://www.wunderkraut.com/blog/how-to-combine-two-facet-items-in-facet-api/2015-03-26. Any thoughts about this? It looks good to me.
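Whichever side does the merging (a mapping table in the database, or a synonym mapping applied at Solr index time), the core of it is a raw-term-to-canonical-term map. A minimal sketch, with the mapping hard-coded for illustration:

```python
from collections import Counter

# Hypothetical map from raw facet terms to a canonical term; in practice
# this would live in a database table or be applied at index time.
CANONICAL = {
    "red": "red",
    "bordeaux": "red",
    "light red": "red",
    "dark red": "red",
}

def merge_facets(facet_counts):
    """Collapse raw facet counts into counts per canonical term."""
    merged = Counter()
    for term, count in facet_counts.items():
        # Unknown terms pass through unchanged.
        merged[CANONICAL.get(term, term)] += count
    return dict(merged)

raw = {"red": 10, "bordeaux": 3, "light red": 5, "blue": 2}
merged = merge_facets(raw)
```

Doing this at index time (storing the canonical term in a dedicated facet field) keeps facet counts correct without any post-processing at query time.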

Sql Server Full Text: Human names which sound alike

I have a database with lots of customers in it. A user of the system wants to be able to look up a customer's account by name, amongst other things.
What I have done is create a new table called CustomerFullText, which just has a CustomerId and an nvarchar(max) field "CustomerFullText". In "CustomerFullText" I keep concatenated together all the text I have for the customer, e.g. First Name, Last Name, Address, etc, and I have a full-text index on that field, so that the user can just type into a single search box and gets matching results.
I found this gave better results than trying to search data stored in lots of different columns, although I suppose I'd be interested in hearing whether this in itself is a terrible idea.
Many people have names which sound the same but have different spellings: Katherine, Catherine, and Catharine, and perhaps someone whose record in the database is Katherine but who introduces themselves as Kate. Also McDonald vs. MacDonald, Liz vs. Elisabeth, and so on.
Therefore, what I'm doing is, while storing the original name correctly, making a series of replacements before I build the full text. So ALL of Katherine, Catherine, and so on are replaced with "KATE" in the full-text field. I apply the same transform to my search parameter before querying the database, so someone who types "Catherine" into the search box will actually run a query for "KATE" against the full-text index, which will match Catherine AND Katherine and so on.
My question is: does this duplicate any part of existing SQL Server Full Text functionality? I've had a look, but I don't think that this is the same as a custom stemmer or word breaker or similar.
Rather than trying to phonetically normalize your data yourself, I would use the Double Metaphone algorithm, which is essentially a much better implementation of the basic SOUNDEX idea.
You can find an example implementation here: http://www.sqlteam.com/forums/topic.asp?TOPIC_ID=13574, and more are listed in the Wikipedia link above.
It generates two normalized code versions of your word. You can persist those in two additional columns and compare them against your search text, which you would convert to Double Metaphone on the fly.
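To make the overall shape of the question's approach concrete, here is a sketch of the hand-maintained replacement-table version (the name groups are illustrative; a Double Metaphone implementation would replace the hand-kept list with computed codes):

```python
import re

# Each group of spellings collapses to one canonical token before indexing
# and before searching, so both sides of the comparison agree.
NAME_GROUPS = {
    "KATE": {"katherine", "catherine", "catharine", "kate"},
    "MACDONALD": {"mcdonald", "macdonald"},
    "ELISABETH": {"elisabeth", "elizabeth", "liz"},
}
CANON = {variant: canon
         for canon, variants in NAME_GROUPS.items()
         for variant in variants}

def normalize(text):
    """Replace known name variants with their canonical token; leave
    everything else as an upper-cased word."""
    words = re.findall(r"[A-Za-z]+", text.lower())
    return " ".join(CANON.get(w, w.upper()) for w in words)
```

Applying `normalize` both when building the full-text field and to the user's search term is what makes "Catherine" find "Katherine": both become "KATE".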

How to implement an Enterprise Search

We are searching disparate data sources in our company. We have information in multiple databases that needs to be searched from our intranet. Initial experiments with Full Text Search (FTS) proved disappointing. We've implemented a custom search engine that works very well for our purposes. However, we want to make sure we are doing "the right thing" and aren't missing any great tools that would make our job easier.
What we need:
Column search
ability to search by column
we flag which columns in a table are searchable
Keep some relation between db column and data
we provide advanced filtering on the results
facilitates (amazon style) filtering
filter provided by grouping of results and allowing user to filter them via a checkbox
this is a great feature, users like it very much
Partial Word Match
we have a lot of unique identifiers (product IDs, etc.)
the unique IDs can have sub-parts with meaning (location, etc.)
or only a portion may be available (when the user is searching)
or (by a decidedly poor design decision) there may be whitespace in the ID
this is a major feature that we've implemented via CHARINDEX (MSSQL) and INSTR (Oracle)
using the char-index functions turned out to give equivalent (+/-) performance on MSSQL compared to full text
didn't test on Oracle
however searches against both types of db are very fast
We take advantage of Indexed (MSSQL) and Materialized (Oracle) views to increase speed
this is a huge win, Oracle Materialized views are better than MSSQL Indexed views
both provide speedups in read-only join situations (like a search combining company and product)
A search that matches user expectations of the paradigm: Ctrl-F -> enter text -> find matches
Full Text Search is not the best in this area (slow and inconsistent matching)
partial matching (see "Partial Word Match")
Nice to have:
Search database in real time
skip the indexing step; this is not a hard requirement
Spelling suggestion
Xapian has this http://xapian.org/docs/spelling.html
Similar to google's "Did you mean:"
What we don't need:
We don't need to index documents
at this point, searching our data sources is the most important thing
even when we do search documents, we will be looking for partial word matching, etc
Ranking
Our own simple ranking algorithm has proven much better than an FTS equivalent.
Users understand it, we understand it, it's almost always relevant.
Stemming
Just don't need to get [run|ran|running]
Advanced search operators
phrase matching, or/and, etc
according to Jakob Nielsen http://www.useit.com/alertbox/20010513.html
most users are using simple search phrases
very few use advanced searches (when it's available)
also in Information Architecture 3rd edition Page 185
"few users take advantage of them [advanced search functions]"
http://oreilly.com/catalog/9780596000356
our Amazon-like filtering allows better filtering anyway (per user testing)
Full Text Search
We've found that results don't always "make sense" to the user
Searching with FTS is hard to tune (which set of operators matches the users' expectations?)
Advanced search operators are a no go
we don't need them because
users don't understand them
Performance has been very close (+/-) to the char-index functions
but the results are sometimes just "weird"
The question:
Is there a solution that allows us to keep the key value pair "filtering feature", offers the column specific matching, partial word matching and the rest of the features, without the pain of full text search?
I'm open to any suggestion. I've wondered if a document/hash-table NoSQL data store (MongoDB, et al.) might be of use ( http://www.mongodb.org/display/DOCS/Full+Text+Search+in+Mongo ). Any experience with these is appreciated.
Again, just making sure we aren't missing something with our in-house customized version. If there is something "off the shelf" I would be interested in it. Or if you've built something from some components, what components (search engines, data stores, etc) did you use and why?
You can also make your point for FTS. Just make sure it meets the requirements above before you say "just use Full Text Search because that's the only tool we have."
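For reference, the CHARINDEX/INSTR-style partial word matching described in the requirements amounts to a case-insensitive substring test over the flagged columns. A minimal in-memory sketch (the rows, column names, and whitespace-stripping rule are illustrative):

```python
# Rows as dicts; SEARCHABLE mirrors the "we flag which columns in a table
# are searchable" requirement above.
ROWS = [
    {"product_id": "NY 1234-A", "name": "Widget"},
    {"product_id": "TX 5678-B", "name": "Gadget"},
]
SEARCHABLE = ["product_id", "name"]

def partial_match(rows, term):
    """Return rows where any searchable column contains the term,
    ignoring case and embedded whitespace (IDs may contain stray spaces)."""
    needle = term.replace(" ", "").lower()
    return [r for r in rows
            if any(needle in str(r[c]).replace(" ", "").lower()
                   for c in SEARCHABLE)]
```

This is what CHARINDEX/INSTR give you server-side: `1234` finds `NY 1234-A` even though it is only a fragment of the ID, something word-oriented FTS handles poorly.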
I ended up coding my own.
The results are fantastic. Users like it, it works well with our existing technologies.
It really wasn't that hard. Just took some time.
Features:
Faceted search (amazon, walmart, etc)
Partial word search (the real thing, not full text)
Search databases (oracle, sql server, etc) and non database sources
Integrates well with our existing environment
Maintains relations, so I can have a n to n search and display
--> this means I can display child records of a master record in search results
--> also I can search any child field and return the master record
It's really amazing what you can do with dictionaries and a lot of memory.
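In the spirit of that remark, here's a toy sketch of the dictionaries-in-memory approach: an inverted index over a searchable field plus facet counts over the matched set (records and field names are invented for illustration):

```python
from collections import defaultdict, Counter

RECORDS = {
    1: {"name": "red widget",  "company": "Acme"},
    2: {"name": "blue widget", "company": "Acme"},
    3: {"name": "red gadget",  "company": "Globex"},
}

# Inverted index: word -> set of record ids containing it.
index = defaultdict(set)
for rid, rec in RECORDS.items():
    for word in rec["name"].split():
        index[word].add(rid)

def search(term, facet_field="company"):
    """Return matching record ids plus facet counts over the matches,
    i.e. the checkbox groups of Amazon-style filtering."""
    ids = sorted(index.get(term, set()))
    facets = Counter(RECORDS[i][facet_field] for i in ids)
    return ids, dict(facets)

ids, facets = search("red")
```

Counting facet values only over the matched set is what keeps the filter checkboxes honest: each option shows how many results selecting it would leave.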
I recommend looking into Solr; I believe it will meet your needs:
http://lucene.apache.org/solr/
For an off-the-shelf solution: have you checked out the Google Search Appliance?
Quote from the Google Mini/GSA site:
... If direct database indexing is a requirement for you, we encourage you to consider the Google Search Appliance, which has direct database connectivity.
And of course it indexes everything else in the Googly manner you'd expect it to.
Apache Solr is a good way to start your project, and it is open source. You can also try Elasticsearch, and there are a lot of off-the-shelf products which offer good customization abilities and search features, such as Coveo, SharePoint FAST, Google...
