I frequently use my development datasets (POs, mainly) such as code, JCL, and output datasets. I also have a few prod datasets that I access frequently. How do I go about editing the screen to customize the datasets? I see more space, but IBM only gives one line for an input dataset.
Is there any way to edit this without disrupting anything else?
Thanks in advance.
This is normally done via a personal dataset list. Once created, you can key dslist <listname> on the Command line of any ISPF screen and get a 3.4-style list of your datasets.
EZYEDIT is a productivity tool from Broadcom Inc. which allows you to create and save lists of frequently used datasets. Each dataset name can be assigned to a number, and you can open a dataset by entering the appropriate number. Up to 996 dataset names can be stored.
I am developing a health care system, and I want the doctor, when entering a diagnosis, to select from a displayed list instead of typing it freely.
The list contains diseases and symptoms that will then be inserted into a diagnosis table in the database.
I did that for two reasons:
I want all doctors to use the same list of symptoms when writing their diagnoses, so that data can be worked with later on, instead of each one typing it his own way.
The data will be localized and translated into different languages when displayed in different regions.
I am facing a problem here: should I put all of these in a lookup table in a database or in a config file, given that there are 3,000 rows in 7 languages (each language will have its own column) and I may add or remove data at any time?
I would put them in a database. I find it easier to maintain, and faster to query than a config file.
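A minimal sketch of that lookup table, assuming the column-per-language layout you describe; the table, column, and file names here are hypothetical, and SQLite is used only to keep the example self-contained:

import sqlite3

# One row per diagnosis term, one column per language, as described.
conn = sqlite3.connect("healthcare.db")
conn.execute("""
CREATE TABLE IF NOT EXISTS diagnosis_terms (
    term_id INTEGER PRIMARY KEY,
    en TEXT NOT NULL,  -- English label
    fr TEXT,           -- French label
    ar TEXT            -- Arabic label; add one column per language
)
""")
conn.execute(
    "INSERT INTO diagnosis_terms (en, fr) VALUES (?, ?)",
    ("Hypertension", "Hypertension artérielle"),
)
conn.commit()

# The autocomplete query: match on the prefix the doctor has typed,
# selecting the column for the user's language.
rows = conn.execute(
    "SELECT term_id, en FROM diagnosis_terms WHERE en LIKE ? ORDER BY en",
    ("Hyper%",),
).fetchall()
print(rows)

One design note: a separate translations table (term_id, language, label) is the more common normalization, since adding an eighth language then means inserting rows rather than altering the table.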
I have dumped a lot of shapefiles into a PostGIS database and will be using this for a GIS application through QGIS. I have developed a data dictionary that is sensible and intuitive for the end user gathering data on the Trimble, so that is how the initial table is generated, with the end user on the Trimble being the top priority. Now, when I use this gathered data in QGIS, the attribute columns that show up for selected points on some feature classes are not very simplified.
For example, we are a pipeline utility, and I have the Trimble set up so that when the user selects a nominal pipe diameter, it offers different wall thicknesses based on that pipe size. This works well on the Trimble, but then I get 8 or 9 blank values for every piece of pipe in the map. So if I select a 2" main, the attributes display the size and wall thickness fields for every size from 3/4" to 10", but the only one that has a value is the 2". What I would like to do is create a new table that simplifies this, then take the data from the table housing the Trimble-generated data and dump it into the new table. This will not only require importing the data from one Postgres table "Main" in the db "GIS" into a new table "Main" in my trial database, but will also require some code to search through columns a, b, c, d, etc. to find the one that isn't null and send it to the new table's Wall Thickness column.
I have several properties, and several tables that have these problems, but I think if someone can help or at least point me in the right direction on this particular situation, I can do the rest of them. This is seriously the last step in my GIS build before we are ready to start rolling it out, and I would really appreciate if someone could help me clean this up a bit.
I have pgAdmin 4, QGIS 2.18, some knowledge of SQL, and I can manipulate data from the command line, and I see that QGIS has a built-in Python console that may be able to help. I know Python a bit and could probably get by if the best route is through it. Just a bit of information on myself to help you guys determine my best route. Thanks again!
p.s. I have added pictures of my existing data structure for the "Main" feature in qgis, and a picture of the new table I'd like to populate for the "Main" feature in command line. http://imgur.com/a/bkUqS
I myself am a pipeline surveyor turned self-taught programmer, so I hope I understand the first part of your dilemma. Each piece of pipe can have a different wall thickness value. Example: the mainline pipe is 0.250, and say all the road bore and HDD pipe is 0.300 wall for the heavy-wall pipe.
For your first query on the 'raw' field data, can you try something like:
SELECT *
FROM your_table
WHERE wall_thickness_value IN (0.250,0.300)
Since no pipe would normally have two wall thickness values without some type of transition weld between them, this query will hopefully grab the actual value for that piece of pipe and not return all the nulls.
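For the column-consolidation step you describe (several size-specific columns with only one non-null per row), here is a hedged sketch using COALESCE, which returns its first non-null argument. The table and column names (main, main_clean, wt_075 through wt_10) are hypothetical stand-ins for your actual schema:

import psycopg2

# Connect to the "GIS" database; adjust credentials as needed.
conn = psycopg2.connect(dbname="GIS", user="postgres")
cur = conn.cursor()

# COALESCE picks the first non-null column, so listing every
# size-specific wall-thickness column collapses them into one value.
cur.execute("""
INSERT INTO main_clean (id, pipe_size, wall_thickness, geom)
SELECT id,
       pipe_size,
       COALESCE(wt_075, wt_1, wt_2, wt_4, wt_10) AS wall_thickness,
       geom
FROM main
""")
conn.commit()
cur.close()
conn.close()

The same statement could be run as plain SQL from pgAdmin; the Python wrapper is only there so the script can also be run from the QGIS console you mentioned.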
After doing a sophisticated search of tables with several criteria (part of a name, a specific stereotype, etc.), I got a result list of many tables. It's not clear how to select all of them at once on the diagram (for example, to group them, move them to a separate space, or apply the same format to all of them, like a background color).
It looks like the only thing we can do with the results is limited to "Find in diagram" for a single selected table. Is it possible to somehow work around this limitation and "select in diagram" all (or several multi-selected) tables from the search result list?
PS: using PowerDesigner 16.5 SP3 in Physical Model mode.
PPS: the current workaround is doing the search in VBScript code and also manipulating symbol formats from VBScript. It's rather inconvenient to write code for simple GUI manipulations which could be done manually for selected objects. I hope for a better workaround...
I'm a newbie to app development. I'm using Xcode 4.3.2. I'm attempting to develop an app using a tab bar with a table view. In the table view I need to list about 100 cities, plus info about each of those cities when the user selects one. I already have that data about the cities in an Excel spreadsheet.
I can't really find good examples of what I want to achieve. I've heard the terms parsing XML, SQLite, Core Data, database, etc, and I'm not sure if that is what I need to do.
I'd gratefully accept any suggestions.
If the data in the table will change or be edited, then by using a database you avoid rolling a new patch for those minor changes (you just change the values in the db).
If the data is the same and won't change for a long time, and you plan to patch the application anyway, then you just need a source for that data (the spreadsheet).
For parsing the data, you can use anything. When talking about showing 100 cities, it depends on how big the total data you will be querying is and how fast it needs to be; you just need to benchmark it.
If you are querying about 500k records, need to do some 'figuring out', and it takes too long to load, then transforming your data into XML and parsing that may give you better performance.
You have to at least design your way into what you want to achieve. Check the performance and tweak it to find a decent spot.
Right now I look at it as tackling an unknown problem. Spend some time and build something. This will help you see the potential problems better.
While databases are good, for a few hundred elements you can tolerate inefficiency. If your existing data are in an Excel spreadsheet, the easiest way to get them into your app is to export the Excel spreadsheet to Comma-Separated-Values (CSV), then make your app read CSV files. (If your Excel spreadsheet has multiple worksheets, you'll need to convert each separately.)
How do you parse CSV? See iPhone : How to convert CSV format into NSData or NSString?
You'll end up with arrays of arrays of NSString. You'll probably need to define a new class for your city data, and convert each row in the imported data to one city element.
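That conversion step might look like this; a minimal sketch in Python for brevity (the app itself would do this in Objective-C after parsing), with a hypothetical three-column layout of name, population, and summary:

import csv

class City:
    # One object per imported row; fields match the hypothetical columns.
    def __init__(self, name, population, summary):
        self.name = name
        self.population = population
        self.summary = summary

cities = []
with open("cities.csv", newline="", encoding="utf-8") as f:
    reader = csv.reader(f)
    next(reader)  # skip the header row
    for row in reader:
        name, population, summary = row
        cities.append(City(name, int(population), summary))

print(len(cities), "cities loaded")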
If you need to know more, posting a few rows from your spreadsheet may help.
I have a location autocomplete field which has autocomplete for all countries, cities, neighborhoods, villages, and zip codes. This is part of a location tracking feature I am building for my website, so you can imagine this list will be in the multi-millions of rows; I'm expecting over 20 million at least with all the villages and postal codes. To make the autocomplete work well, I will use memcached so we don't always hit the database to get this list. It will be used a lot, as this is the primary feature on the site. But the question is:
Is only one instance of the list stored in memcached irrespective of the users pulling the info, or does it need to maintain a separate instance for each? So if, say, 20 million people are using it at the same time, will that differ from just one person using the location autocomplete? I am open to other ideas on how to implement this location autocomplete so it performs well.
Or can I do something like this: when a user logs in, I send them the list in the background anyway, so by the time they reach the autocomplete text field their computer already has it ready to load instantly?
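To the first question: memcached stores one value per key, so every user who asks for the same key reads the same single cached entry. A small sketch of the usual cache-aside pattern, assuming pymemcache; the key scheme and the database helper are hypothetical:

from pymemcache.client.base import Client

client = Client(("localhost", 11211))

def load_matches_from_db(prefix):
    # Stand-in for the real database query against the locations table.
    return ["New York", "Newark", "New Delhi"]

def autocomplete(prefix):
    # One cache entry per typed prefix, shared by all users.
    # memcached keys cannot contain spaces, so replace them.
    key = "ac:" + prefix.lower().replace(" ", "_")
    cached = client.get(key)
    if cached is not None:
        return cached.decode("utf-8").split("\n")
    matches = load_matches_from_db(prefix)
    client.set(key, "\n".join(matches).encode("utf-8"), expire=3600)
    return matches

print(autocomplete("new"))

Note that a single memcached value is capped at 1 MB by default, so caching the entire 20-million-row list as one entry won't work; caching small per-prefix result sets like this avoids that. Pushing the full list to every browser at login would hit the same size problem, only worse.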
Take a look at Solr (or Lucene itself); using NGram (or EdgeNGram) tokenizers, you can get good autocomplete performance on massive datasets.
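For intuition, here is a toy in-memory illustration of what an EdgeNGram tokenizer does (this is not Solr configuration): every prefix of each term is indexed at index time, so an autocomplete lookup becomes an exact-match hit instead of a scan:

from collections import defaultdict

def edge_ngrams(term, min_len=1, max_len=10):
    # All leading prefixes of the term, e.g. "new" -> ["n", "ne", "new"].
    term = term.lower()
    return [term[:i] for i in range(min_len, min(len(term), max_len) + 1)]

index = defaultdict(set)
for city in ["New York", "Newark", "New Delhi", "Nairobi"]:
    for gram in edge_ngrams(city):
        index[gram].add(city)

print(sorted(index["new"]))    # ['New Delhi', 'New York', 'Newark']
print(sorted(index["new d"]))  # ['New Delhi']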