Is it possible to list the ClearCase views accessed in the last 12 months only? On a particular server, I want to list only the views accessed within a 12-month period. Since I am decommissioning this server, I want to keep a record of these views. Is that possible?
Any inputs are appreciated!
Considering cleartool lsview, you can use the -age option:
Reports when and by whom the view was last accessed. Because view access events are updated only every 60 seconds, lsview may not report all recent events.
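To answer the original question, a small shell wrapper around lsview can filter on that date. This is only a sketch: it assumes GNU date is available and that your ClearCase version prints a line of the form "Last accessed <date> by <user>", so adjust the sed parsing to your installation's actual output.

#!/bin/sh
# List view tags whose "Last accessed" date falls within the last 12 months.
cutoff=$(date -d '12 months ago' +%s)
for tag in $(cleartool lsview -short); do
    last=$(cleartool lsview -age "$tag" | sed -n 's/.*Last accessed \([^ ]*\).*/\1/p')
    [ -z "$last" ] && continue                             # no access date reported
    when=$(date -d "$last" +%s 2>/dev/null) || continue    # unparseable date
    [ "$when" -ge "$cutoff" ] && echo "$tag (last accessed $last)"
done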
This technote describes precisely which events modify the "Last accessed" date of a view.
Only operations that result in a view database change will change this "last accessed" time.
These actions include:
Writing a view private object
Removing a view private file
Checking out a file (which creates a view-private copy of the checked-out version)
Checking in a file (which removes the view-private copy)
Creating a view
Writing or creating a derived object
Winking in a derived object
Promoting a derived object
Setting a config spec
Actions such as starting the view, cd'ing into a view, and setting to a view do not change the view configuration or database and thus do not update the last-accessed time.
Additionally, since ClearCase caches RPC results to improve performance, subsequent executions of cleartool lsview -age may not immediately reflect the most recent operation that changed the above "last accessed" time. The "last accessed" change may take up to 5 minutes to be reflected in the command's output.
If the "last accessed" is to be used in a script to delete views over a certain age, please note that this implementation issue may cause views that are in fact in use to be eligible for removal.
One example is a view that is created to hold trigger scripts that are under source control. This view's configuration may never change, and it may not be used for actual modifications to the trigger scripts. Any such views would have to be specifically excluded from removal.
I have a large table in Snowflake (terabytes per month) which gets loaded using an external S3 stage. I loaded data into it using a wrong COPY command for a month. I know the pattern of the S3 objects that got loaded, and I store their names in one of the table's columns.
Is there any way for me to selectively remove the already-loaded metadata for this table, so that the table forgets that these specific files were already loaded?
The idea is that I can then delete those records using the S3 object names and load those S3 objects again with a fixed COPY command.
The other option I have is to load these S3 objects into another table and perform an UPSERT.
But I am just checking if there is any option for selective removal of load metadata on a Snowflake table.
Any answer is welcome. Thanks in advance.
I don't think you can alter the metadata (which really means I don't believe you can, but I also haven't looked too hard, as I have other options).
You can, however, give COPY INTO an explicit set of files:
COPY INTO
[ FILES = ( '<file_name>' [ , '<file_name>' ] [ , ... ] ) ]
and use FORCE = TRUE to make it ignore that metadata list.
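As a minimal sketch (the table, stage, and file names here are placeholders, not from the original post):

COPY INTO my_table
  FROM @my_s3_stage
  FILES = ('path/bad_file_1.csv', 'path/bad_file_2.csv')
  FORCE = TRUE;

Note that FORCE = TRUE loads the listed files even though the load metadata says they were already loaded, so delete the bad rows first or you will end up with duplicates.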
That might be as much hoop-jumping as loading into another table and then merging in. It depends on whether you are trying to have it all happen auto-magically as part of your "normal process" and thus do all the other normal things.
But that begs the question: if you have hundreds or thousands of files and are relying on the metadata/file age to prune the list of files down to just the new ones, your COPY INTO will suffer a performance hit from doing this check every time. One way around that is to keep a high-water mark of dates and step forward to now. That also means you can reset the high-water mark (this is how we handled the task). Our code dynamically loaded blocks in batches, so it would normally be working in the "current hour", but if it was weeks or months behind (because the watermark had been reset) it would load month-sized chunks, so the data was never too much for the warehouse (we were also load-balancing many tables)...
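A rough sketch of that high-water-mark idea, assuming date-partitioned S3 paths (the table, stage, and path layout are assumptions):

-- Control table remembering how far loading has progressed
CREATE TABLE IF NOT EXISTS load_watermark (tbl STRING, loaded_through DATE);

-- Each run loads only the partitions after the watermark
-- (the pattern is built from loaded_through by a driver script)
COPY INTO my_table
  FROM @my_s3_stage
  PATTERN = '.*/2023/06/15/.*';

-- ...and then advances the watermark
UPDATE load_watermark SET loaded_through = '2023-06-15' WHERE tbl = 'my_table';

Resetting loaded_through to an older date makes the next run re-walk those partitions, in whatever chunk size the driver chooses.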
Anyways. Those are some other options/thoughts.
Normally, an update of a snapshot view detects hijacks by examining file size and timestamp. Is there a way in ClearCase to detect a file whose size and timestamp are unchanged but whose content has changed?
This isn't taken into account by ClearCase, since it assumes that, if the content has changed, the timestamp also has.
See "How the update operation determines whether a file is hijacked":
When a version is loaded into a snapshot view, the file size and last-modified time stamp (as reported by the UNIX® or Windows® file system) are recorded in the view database.
These values are modified each time you check out a file, check in a file, or load a new version into the view.
The update operation
When you update a view, the current size and last-modified time stamp of a non-checked-out file are compared with the size and time stamp recorded in the view database.
If either value is different from the value in the view database, the file is considered hijacked.
Changing only the read-only permission (on UNIX systems) or attribute (on Windows systems) of a non-checked-out file does not necessarily mean that the file is considered hijacked.
The content isn't taken into account here.
The only instance where I had this case, I simply created another snapshot view and fired up a diff tool (WinMerge, KDiff3, BeyondCompare, ...), comparing the content of the two snapshot views.
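In shell terms the comparison can be as simple as the following (the view paths are placeholders); because diff reads file content, it catches changes that the size/timestamp check misses:

# Load a fresh reference snapshot view of the same config spec, then
# compare the two view directories recursively, reporting files that differ.
diff -rq /views/work_view/myvob /views/ref_view/myvob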
I have an edmx file with updatable views. I made these views by following an example here where I delete the name and the type and leave just the dbo schema; however, every time I pick "Update Model from Database", these views and their entire definition, including associations and such, get removed from the file.
To work around this, I end up doing a manual merge with the previous version, but that is a really long and painful process.
Does anyone know what I'm doing wrong?
Example of my declared update-able view:
<EntitySet Name="vw_MeterEmisHist" EntityType="Model.Store.vw_MeterEmisHist" Schema="dbo" />
I have had the same thing happen when adding nodes to allow for mapping stored procedures to entities. The reason is that the XML-formatted EDMX file is completely regenerated every time the model is updated (or created) from the database.
The easiest workaround I have found is to keep a text file within my solution containing the changes I have made, so that they can easily be re-applied. To speed things up, it's possible to create a find/replace macro within Visual Studio to automate the process.
If anyone ever gets really bored, that sort of functionality would make a great add-in. (Or a great fix in VS. MS, are you listening?)
I'm at a client doing some quick fixes to their Access application. It's been a while since I last worked with Access, but I'm recovering quickly. However, I've discovered an interesting problem:
For some reports, I get a "Record is deleted" error. I've checked the reports, and it seems like there's a problem with one table. When opening that table, I find a record where all columns are marked "#deleted". So obviously, this row seems to be the culprit. However, when I try to delete that row, nothing really happens. If I re-open the table, the row still exists.
Is there a corruption in the db? How can I remove this record for good?
Edit: It's an Access 2000-format database.
Solution: A simple compact/repair did not work. I converted the database to the 2003 file format instead, which did the trick. I've marked the first answer suggesting compact/repair, since it pointed me in the right direction. Thanks!
Have you tried the built in Access compact/repair tool? This should flush deleted records from the database.
The exact location varies according to the version of Access you're running, but on Access 2003 it's under Tools > Database Utilities > Compact and repair database. Some earlier versions of Access had two separate tools - one for compact, one for repair - but they were accessed from a similar location. If they are separate on the version the client has, you need to run both.
This should be a non-destructive operation, but it would be best to test this on a copy of the MDB file (apologies for stating the obvious).
Tony Toews, Access MVP, has a comprehensive guide to corruption:
Corrupt Microsoft Access MDBs FAQ
Some corruption symptoms
Determining the workstation which caused the corruption
Corruption causes
To retrieve your data
As an aside, decompile is very useful for sorting out odd happenings when coding and for improving start-up times.
You can also try this command-line utility.
//andy
Compacting and importing won't fix the problem for the error reported, which is clearly a corrupted pointer for a memo field. The only thing you can do is delete and recreate the record that is causing the problem. And you need to find a way to edit memo data (or eliminate memo fields; do you really need more than 255 characters or not?) that does not expose you to corruption risk. That means avoiding bound controls on forms for memo fields.
Instead, use an unbound textbox, and in the form's OnCurrent event, assign the current data from the form's underlying recordsource:
Me!txtMyMemo = Me!MyMemo
To save edits to the unbound control, use the control's AfterUpdate event:
Me!MyMemo = Me!txtMyMemo
Me.Dirty = False ' save the whole record
Why are memo fields subject to corruption? Because they aren't stored in the same data page as the non-memo fields, but instead, all that is in the record's main data page is a pointer to some other data page (or set of data pages if it's a large chunk of data) where the actual memo data is stored. If it weren't done this way, a record with a memo in it would very quickly exceed the maximum record length.
The pointer is relatively easily corrupted, most often by a fatal problem during editing in a bound control. Editing with an unbound control does not eliminate the problem entirely, but means that the time in which you're exposed to danger is very, very short (i.e., the time it takes for those two lines of code to execute in the AfterUpdate event).
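Put together, the pattern looks roughly like this (txtMyMemo and MyMemo are just the names used above):

Private Sub Form_Current()
    ' Copy the memo value from the bound recordsource into the unbound textbox
    Me!txtMyMemo = Me!MyMemo
End Sub

Private Sub txtMyMemo_AfterUpdate()
    ' Write the edited text back to the bound field and save immediately,
    ' keeping the window of corruption risk as short as possible
    Me!MyMemo = Me!txtMyMemo
    Me.Dirty = False ' save the whole record
End Sub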
Aside from the options already posted above, I've used another simple method as well: simply create a new MDB file and import all objects from the corrupted one. Don't forget to get system and/or hidden objects when you go this way.
I have an Access 'application' (an .adp file). When it opens, I have it update an admin database with the username and the time opened. When it closes, it updates the admin database with the username and time closed. These are separate records in the events table, so it looks like:
username,dbaction,time
bob,open,13:00
gareth,open,13:05
bob,close,14:00
If the user were to open the db twice, there would be 2 open and 2 close actions recorded, but no way to establish which database session each of the 2 close events belonged to.
What I want to store in this table is a unique identifier that links the open and close actions of each session together. Preferably I would like to use a property of the Application object in VBA, if something like that exists. Does it even store the time the db was opened? I could generate my own id when the database is opened and store it in a variable until close, but I'd prefer to use something built in. Any ideas?
I do this using a hidden unbound form which opens on startup. In that form I insert the record into the table. I then fetch the autonumber ID (or whatever SQL Server calls that field) of that record and store it in a text control. If you do any development and you hit an error and reset the running code, you lose all global variables, so I prefer using forms to store these kinds of variables.
In the hidden form's OnClose event I then update the same record with the date/time exited. I've been using this technique for well over a decade without any problems.
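A minimal sketch of that hidden-form technique (the table, field, and control names are assumptions; an .adp would use ADO rather than DAO, but the shape is the same):

Private Sub Form_Open(Cancel As Integer)
    ' Log the session start and remember the new record's autonumber ID
    Dim rs As DAO.Recordset
    Set rs = CurrentDb.OpenRecordset("tblSessions", dbOpenDynaset)
    rs.AddNew
    rs!username = Environ$("USERNAME")
    rs!OpenedAt = Now()
    rs.Update
    rs.Bookmark = rs.LastModified   ' move to the record just added
    Me!txtSessionID = rs!ID         ' stash the ID in a hidden textbox
    rs.Close
End Sub

Private Sub Form_Close()
    ' Stamp the same record with the close time
    CurrentDb.Execute "UPDATE tblSessions SET ClosedAt = Now() " & _
        "WHERE ID = " & Me!txtSessionID, dbFailOnError
End Sub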
You could have a global 'id_session' variable, initialized at startup (a randomly generated uniqueidentifier, for example), that you store in an 'id_session' column of your 'event' table. When the database is opened, the record is inserted into the 'event' table and the record identifier is kept in the global variable. When the database is closed, the existing record is identified (through the id_session value) and updated in the database.
In fact, I do not see the advantage of a built-in identifier over this solution.
There is likely a method that runs in an Access database when it is opened. In it you could generate a random identifier and store it in a global variable. When writing your log line, you could include this identifier, allowing you to trace login and logout.
Update: You can use the code shown here to generate a GUID, which is pretty much guaranteed to be unique, so this should do what you want. If it doesn't, you might need to clarify, as I'm not understanding the question.
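Since the linked code isn't reproduced here, one commonly used way to generate a GUID in VBA (an assumption about what the link showed) is the Scriptlet.TypeLib trick:

Function NewSessionGUID() As String
    ' Scriptlet.TypeLib hands back a fresh GUID string; trim the
    ' trailing null characters it appends
    Dim tl As Object
    Set tl = CreateObject("Scriptlet.TypeLib")
    NewSessionGUID = Left$(tl.GUID, 38)
End Function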
All these answers seem to me to be too clever by half.
What I do is add an Autonumber field to the table I'm logging these events in, then capture the Autonumber value of the startup event record and store it somewhere (usually in a hidden field on the app's startup form) for use at shutdown, when I write the shutdown event with the ID number of the startup event.
Is hWnd going to be unique for 2 Access applications opened at the same time, possibly on the same PC or on 2 different PCs?