Do Flyway migrations support PostgreSQL's COPY?

Having performed a pg_dump of an existing PostgreSQL schema, I have an SQL file containing a number of table population statements that use COPY:
COPY test_table (id, itm, factor, created_timestamp, updated_timestamp, updated_by_user, version) FROM stdin;
1 600 0.000 2012-07-17 18:12:42.360828 2012-07-17 18:12:42.360828 system 0
2 700 0.000 2012-07-17 18:12:42.360828 2012-07-17 18:12:42.360828 system 0
\.
Though not part of the SQL standard, COPY is part of PostgreSQL's SQL dialect.
Performing a Flyway migration (via the Maven plugin) I get:
[ERROR] Caused by org.postgresql.util.PSQLException: ERROR: unexpected message type 0x50 during COPY from stdin
Am I doing something wrong, or is this just not supported?
Thanks.

The short answer is no.
The one definite problem is that the parser is currently not able to deal with this special construct.
The other question is JDBC driver support. Could you try and see whether this syntax is generally supported by the JDBC driver with a single createStatement call?
If it is, please file an issue in the issue tracker and I'll extend the parser.
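If you want to try that check, a minimal standalone probe might look like this (the connection details are placeholders, and the exact failure mode, a server-side error versus a driver-side exception, depends on the pgjdbc version):
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;

public class CopyProbe {
    public static void main(String[] args) {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/test", "postgres", "secret");
             Statement stmt = conn.createStatement()) {
            // Attempt to run COPY ... FROM stdin as an ordinary statement.
            stmt.execute("COPY test_table (id, itm) FROM stdin;\n1\t600\n\\.\n");
            System.out.println("COPY accepted through a plain Statement");
        } catch (SQLException e) {
            System.out.println("Not supported this way: " + e.getMessage());
        }
    }
}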
Update: This is now supported

I have accomplished this for Postgres using:
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.sql.Connection;
import java.sql.SQLException;
import java.util.Arrays;
import java.util.List;

import org.flywaydb.core.api.FlywayException;
import org.flywaydb.core.api.migration.jdbc.JdbcMigration;
import org.postgresql.copy.CopyManager;
import org.postgresql.core.BaseConnection;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Resource and ClassPathScanner are Flyway-internal classes whose package
// varies by Flyway version, so their imports are omitted here.

public abstract class SeedData implements JdbcMigration {

    // The logger was implicit in the original; declared here so the class compiles.
    private static final Logger log = LoggerFactory.getLogger(SeedData.class);

    protected static final String CSV_COPY_STRING =
            "COPY %s(%s) FROM STDIN HEADER DELIMITER ',' CSV ENCODING 'UTF-8'";

    protected CopyManager copyManager;

    @Override
    public void migrate(Connection connection) throws Exception {
        log.info(String.format("[%s] Populating database with seed data", getClass().getName()));
        copyManager = new CopyManager((BaseConnection) connection);
        Resource[] resources = scanForResources();
        List<Resource> res = Arrays.asList(resources);
        for (Resource resource : res) {
            load(resource);
        }
    }

    private void load(Resource resource) throws SQLException, IOException {
        String location = resource.getLocation();
        InputStream inputStream = getClass().getClassLoader().getResourceAsStream(location);
        if (inputStream == null) {
            throw new FlywayException("Failure to load seed data. Unable to load from location: " + location);
        }
        if (!inputStream.markSupported()) {
            // Sanity check. We have to be able to mark the stream.
            throw new FlywayException(
                    "Failure to load seed data as mark is not supported. Unable to load from location: " + location);
        }
        // Set the mark read limit to something big. (Note: 1 << 32 overflows
        // to 1 for an int, so use Integer.MAX_VALUE instead.)
        inputStream.mark(Integer.MAX_VALUE);
        String filename = resource.getFilename();
        // Strip the prefix (e.g. 01_) and the file extension (e.g. .csv)
        String table = filename.substring(3, filename.length() - 4);
        String columns = loadCsvHeader(location, inputStream);
        // Reset to the mark
        inputStream.reset();
        // Use the Postgres COPY command to bring the data in
        long result = copyManager.copyIn(String.format(CSV_COPY_STRING, table, columns), inputStream);
        log.info(String.format(" %s - Inserted %d rows", location, result));
    }

    private String loadCsvHeader(String location, InputStream inputStream) {
        try {
            return new BufferedReader(new InputStreamReader(inputStream)).readLine();
        } catch (IOException e) {
            throw new FlywayException("Failure to load seed data. Unable to load from location: " + location, e);
        }
    }

    private Resource[] scanForResources() throws IOException {
        return new ClassPathScanner(getClass().getClassLoader())
                .scanForResources(getSeedDataLocation(), "", ".csv");
    }

    protected String getSeedDataLocation() {
        return getClass().getPackage().getName().replace('.', '/');
    }
}
To use it, extend the class in the appropriate package:
package db.devSeedData.dev;
public class v0_90__seed extends db.devSeedData.v0_90__seed {
}
All that is needed then is to have CSV files on your classpath under db/devSeedData that follow the format 01_tablename.csv, as in the example below. Columns are extracted from the header line of the CSV.
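For example, a hypothetical db/devSeedData/01_test_table.csv might look like this (the header row must name columns of the target table, since it is passed to COPY):
id,itm,factor
1,600,0.000
2,700,0.000
The leading 01_ prefix and the .csv extension are stripped off, so these rows are copied into test_table.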

Related

Can Android Room validate an imported database before it is opened?

Problem: I can't seem to get an imported database to fail in Android Room until it is first queried. I want to try/catch and validate a database, and I'm not succeeding in catching the failure before the first query. I always thought Android Room validated the database against the schema at the moment of instance creation and build, but apparently not; the database fails only upon first query.
What I'm trying to do: This app manages multiple databases that can be shared between users, so databases can be imported or exported. I suspect that at some point someone will attempt to import a database with the wrong structure, or otherwise do something to make it fail. I'm trying to catch the failure at the instance creation/build of the database, or sooner.
What I've tried: I have a try/catch/finally block at the first instance creation, but it only fails when first queried; only then does it notice that a table or column is missing. I'd like to catch it sooner if possible. I've looked at the RoomDatabase methods, but nothing specifically applies to validation that I can see, other than just letting it break.
I always thought Android Room validated the database at the moment of instance creation and build against the schema, but apparently not.
The database validation is part of the open process, which does not happen until you actually try to access the database, as opposed to when getting the instance.
I can't seem to get an imported database to fail in Android Room until first queried?
When you get the instance you can force an open by getting (or trying to get) a SupportSQLiteDatabase by using either getWritableDatabase or getReadableDatabase via the instance's openHelper.
e.g.
(Kotlin)
db = TheDatabase.getInstance(this)
try {
    val supportDB = db.openHelper.writableDatabase
} catch (e: Exception) {
    ....
}
(Java)
db = TheDatabase.getInstance(this);
try {
    SupportSQLiteDatabase supportDB = db.getOpenHelper().getWritableDatabase();
} catch (Exception e) {
    ....
}
Alternative - Self validation
You could also do your own validation and thus avoid an exception (if the validation is simple enough). You could even make corrections, thus allowing minor transgressions to be acceptable.
Before getting the actual instance, you could get a validation instance (different database name) that is created as per Room and then compare the schemas yourself.
Here's an example designed to detect a missing table, by creating the real database with a different table name (nottablex instead of tableX).
The TableX entity :-
@Entity
class TableX {
    @PrimaryKey
    Long id = null;
    String name;
    String other;
}
No DAOs, as they're not needed for the example.
TheDatabase, with two get-instance methods: one for the normal database, the other for a validation database (an empty model for schema comparison), returned as an SQLiteDatabase:
@Database(entities = {TableX.class}, version = 1)
abstract class TheDatabase extends RoomDatabase {

    private static volatile TheDatabase instance = null;
    private static volatile TheDatabase validationInstance = null;

    static TheDatabase getInstance(Context context, String databaseName) {
        if (instance == null) {
            instance = Room.databaseBuilder(context, TheDatabase.class, databaseName)
                    .allowMainThreadQueries()
                    .build();
        }
        return instance;
    }

    static SQLiteDatabase getValidationInstance(Context context, String databaseName) {
        // Delete the DB if it exists
        if (context.getDatabasePath(databaseName).exists()) {
            context.getDatabasePath(databaseName).delete();
        }
        // Create the database and close it
        TheDatabase db = Room.databaseBuilder(context, TheDatabase.class, databaseName)
                .allowMainThreadQueries()
                .build();
        db.getOpenHelper().getWritableDatabase();
        db.close();
        return SQLiteDatabase.openDatabase(
                context.getDatabasePath(databaseName).getPath(), null, SQLiteDatabase.OPEN_READWRITE);
    }
}
Note that the getWritableDatabase() call forces an open, which creates the model/validation database (otherwise the subsequent openDatabase call would fail).
And finally the demo MainActivity, which creates an invalid database and then does a simple validation (just checking that the expected tables exist). The database is only opened via Room if the tables Room expects exist (which is never the case in this example).
public class MainActivity extends AppCompatActivity {

    public static final String DATABASE_NAME = "the_database.db";
    public static final String VALIDATION_DATABASE_NAME = DATABASE_NAME + "_validation";
    public static final String TAG = "DBVALIDATION";

    TheDatabase db;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
        createIncorrectDatabase(this, DATABASE_NAME);
        if (validateDatabase(VALIDATION_DATABASE_NAME, DATABASE_NAME) < 0) {
            Log.d(TAG, "DATABASE " + DATABASE_NAME + " does not match model.");
        } else {
            /* Database validated OK so use it */
            db = TheDatabase.getInstance(this, DATABASE_NAME);
        }
    }

    /* Purposefully create an invalid database */
    private void createIncorrectDatabase(Context context, String databaseName) {
        File dbfile = context.getDatabasePath(databaseName);
        if (!dbfile.exists()) {
            dbfile.getParentFile().mkdirs();
        }
        SQLiteDatabase db = SQLiteDatabase.openOrCreateDatabase(context.getDatabasePath(databaseName), null);
        db.execSQL("CREATE TABLE IF NOT EXISTS nottablex(id INTEGER PRIMARY KEY,name TEXT)");
        db.close();
    }

    @SuppressLint("Range")
    private long validateDatabase(String modelDatabase, String actualDatabase) {
        String sqlite_master = "sqlite_master";
        /* Want to skip room_master_table and sqlite tables such as sqlite_sequence */
        /* In this example only tables are checked, to show the basic technique */
        String whereClause = "name NOT LIKE 'sqlite_%' AND name NOT LIKE 'room_%' AND type = 'table'";
        long rv = 0;
        /* Get the model/validation database */
        SQLiteDatabase modelDB = TheDatabase.getValidationInstance(this, modelDatabase);
        /* Only need to check if the database exists, as otherwise it will be created according to Room */
        if (this.getDatabasePath(actualDatabase).exists()) {
            /* Open as an SQLiteDatabase so there is no Room open to throw an exception */
            SQLiteDatabase actualDB = SQLiteDatabase.openDatabase(
                    this.getDatabasePath(actualDatabase).getAbsolutePath(), null, SQLiteDatabase.OPEN_READWRITE);
            /* Get the tables expected from the model Room database */
            Cursor modelTableNames = modelDB.query(sqlite_master, null, whereClause, null, null, null, null);
            Cursor actualTableNames = null; /* prepare Cursor */
            /* Loop through the table names in the model, checking if they exist */
            while (modelTableNames.moveToNext()) {
                /* See if the expected table exists */
                actualTableNames = actualDB.query(sqlite_master, null, "name=?",
                        new String[]{modelTableNames.getString(modelTableNames.getColumnIndex("name"))},
                        null, null, null);
                if (!actualTableNames.moveToFirst()) {
                    Log.d(TAG, "Table " + modelTableNames.getString(modelTableNames.getColumnIndex("name")) + " not found.");
                    rv = rv - 1; /* Table not found, so decrement rv to count the missing tables */
                }
            }
            /* Close the actualTableNames Cursor if it was used */
            if (actualTableNames != null) {
                actualTableNames.close();
            }
            /* Close the modelTableNames Cursor */
            modelTableNames.close();
            /* Close the actual database so Room can use it (comment out to show results in the Database Inspector) */
            actualDB.close();
        } else {
            Log.d(TAG, "Actual Database " + actualDatabase + " does not exist. No validation required as it would be created");
        }
        /* Close and delete the model database (comment out to show results in the Database Inspector) */
        modelDB.close();
        this.getDatabasePath(modelDatabase).delete();
        return rv;
    }
}
Result
The log includes :-
D/DBVALIDATION: Table TableX not found.
D/DBVALIDATION: DATABASE the_database.db does not match model.
Bypassing the close and the delete of the model database lets you inspect both databases (e.g. in Android Studio's Database Inspector); screenshots are omitted here.
Note in this simple example Room would actually create the TableX table rather than fail with an exception.
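If you also want to detect missing columns, not just missing tables, one hedged extension of the technique above is to compare the column names reported by PRAGMA table_info for each table (the columnNames helper below is mine, not part of the original example, and assumes java.util.Set/HashSet imports):
/* Collect the column names of a table so the model's set can be
   compared against the actual database's set. */
private Set<String> columnNames(SQLiteDatabase db, String table) {
    Set<String> columns = new HashSet<>();
    Cursor cursor = db.rawQuery("PRAGMA table_info(" + table + ")", null);
    while (cursor.moveToNext()) {
        columns.add(cursor.getString(cursor.getColumnIndexOrThrow("name")));
    }
    cursor.close();
    return columns;
}
Inside the validation loop you could then decrement rv whenever columnNames(actualDB, name).containsAll(columnNames(modelDB, name)) is false.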

Flink JDBC Sink part 2

I posted a question a few days back: Flink Jdbc sink.
Now, I am trying to use the sink provided by flink.
I have written the code and it runs, but nothing gets saved to the DB and no exceptions are thrown. With my previous sink the job would not finish (which is ideally what should happen, as it's a streaming app), but with the following code I get no error and nothing is saved to the DB.
public class CompetitorPipeline implements Pipeline {

    private final StreamExecutionEnvironment streamEnv;
    private final ParameterTool parameter;
    private static final Logger LOG = LoggerFactory.getLogger(CompetitorPipeline.class);

    public CompetitorPipeline(StreamExecutionEnvironment streamEnv, ParameterTool parameter) {
        this.streamEnv = streamEnv;
        this.parameter = parameter;
    }

    @Override
    public KeyedStream<CompetitorConfig, String> start(ParameterTool parameter) throws Exception {
        CompetitorConfigChanges competitorConfigChanges = new CompetitorConfigChanges();
        KeyedStream<CompetitorConfig, String> competitorChangesStream = competitorConfigChanges.run(streamEnv, parameter);
        // Add the JDBC sink
        competitorChangesStream.addSink(JdbcSink.sink(
                "insert into competitor_config_universe(marketplace_id, merchant_id, competitor_name, comp_gl_product_group_desc," +
                        " category_code, competitor_type, namespace, qualifier, matching_type," +
                        " zip_region, zip_code, competitor_state, version_time, compConfigTombstoned, last_updated) values (?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)",
                (ps, t) -> {
                    ps.setInt(1, t.getMarketplaceId());
                    ps.setLong(2, t.getMerchantId());
                    ps.setString(3, t.getCompetitorName());
                    ps.setString(4, t.getCompGlProductGroupDesc());
                    ps.setString(5, t.getCategoryCode());
                    ps.setString(6, t.getCompetitorType());
                    ps.setString(7, t.getNamespace());
                    ps.setString(8, t.getQualifier());
                    ps.setString(9, t.getMatchingType());
                    ps.setString(10, t.getZipRegion());
                    ps.setString(11, t.getZipCode());
                    ps.setString(12, t.getCompetitorState());
                    ps.setTimestamp(13, Timestamp.valueOf(t.getVersionTime()));
                    ps.setBoolean(14, t.isCompConfigTombstoned());
                    ps.setTimestamp(15, new Timestamp(System.currentTimeMillis()));
                    System.out.println("sql" + ps);
                },
                new JdbcConnectionOptions.JdbcConnectionOptionsBuilder()
                        .withUrl("jdbc:mysql://127.0.0.1:3306/database")
                        .withDriverName("com.mysql.cj.jdbc.Driver")
                        .withUsername("xyz")
                        .withPassword("xyz#")
                        .build()));
        return competitorChangesStream;
    }
}
You need to enable auto-commit mode for the JDBC sink:
new JdbcConnectionOptions.JdbcConnectionOptionsBuilder()
        .withUrl("jdbc:mysql://127.0.0.1:3306/database?autocommit=true")
It looks like SimpleBatchStatementExecutor only works in auto-commit mode. If you need to commit and roll back batches, then you have to write your own JdbcBatchStatementExecutor, as in the sketch below.
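A rough sketch of what such an executor could look like, reusing the question's CompetitorConfig type; note that JdbcBatchStatementExecutor lives in an internal Flink package (org.apache.flink.connector.jdbc.internal.executor), so the exact interface and the wiring into JdbcSink can differ between Flink versions. Treat this as illustrative rather than a drop-in replacement:
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

import org.apache.flink.connector.jdbc.JdbcStatementBuilder;
import org.apache.flink.connector.jdbc.internal.executor.JdbcBatchStatementExecutor;

public class CommittingStatementExecutor implements JdbcBatchStatementExecutor<CompetitorConfig> {

    private final String sql;
    private final JdbcStatementBuilder<CompetitorConfig> builder;
    private transient Connection connection;
    private transient PreparedStatement statement;

    public CommittingStatementExecutor(String sql, JdbcStatementBuilder<CompetitorConfig> builder) {
        this.sql = sql;
        this.builder = builder;
    }

    @Override
    public void prepareStatements(Connection connection) throws SQLException {
        this.connection = connection;
        this.statement = connection.prepareStatement(sql);
    }

    @Override
    public void addToBatch(CompetitorConfig record) throws SQLException {
        builder.accept(statement, record);
        statement.addBatch();
    }

    @Override
    public void executeBatch() throws SQLException {
        statement.executeBatch();
        // Commit explicitly instead of relying on auto-commit.
        if (!connection.getAutoCommit()) {
            connection.commit();
        }
    }

    @Override
    public void closeStatements() throws SQLException {
        statement.close();
    }
}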
Have you tried including JdbcExecutionOptions? Without them the sink uses its default execution options (a batch size of 5000 and no flush interval), so in a low-volume stream nothing may be flushed to the database until a checkpoint occurs:
dataStream.addSink(JdbcSink.sink(
        sql_statement,
        (statement, value) -> {
            /* Prepared Statement */
        },
        JdbcExecutionOptions.builder()
                .withBatchSize(5000)
                .withBatchIntervalMs(200)
                .withMaxRetries(2)
                .build(),
        new JdbcConnectionOptions.JdbcConnectionOptionsBuilder()
                .withUrl("jdbc:mysql://127.0.0.1:3306/database")
                .withDriverName("com.mysql.cj.jdbc.Driver")
                .withUsername("xyz")
                .withPassword("xyz#")
                .build()));

How to reload list resource bundle in ADF 12c

I am failing to reload my resource bundle class to reflect the changed translations (made by the end user) on the page, even though the getContent method executes, all translations are fetched from the database as key/value pairs, and the Object[][] is returned from getContent successfully. This happens each time I clear the cache and refresh the JSF page through an actionListener.
ResourceBundle.clearCache();
I also tried the following and got the same result.
ResourceBundle.clearCache(Thread.currentThread().getContextClassLoader());
Why does WLS always see the old one? Am I missing something?
versions: 12.2.1.1.0 and 12.2.1.3.0
After the end user makes the translations (contributing to the internationalization of the project), they are saved to the database.
The process to enforce these operations is done through the following steps:
Create a HashMap and load all the resource key/value pairs into the map from the database:
while (rs.next()) {
    bundle.put(rs.getString(1), rs.getString(2));
}
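For context, a hedged sketch of how that map might be filled; the table and column names, and the open java.sql.Connection conn, are placeholders of mine, not from the original post:
Map<String, String> bundle = new HashMap<>();
try (PreparedStatement ps = conn.prepareStatement(
        "SELECT resource_key, resource_value FROM translations WHERE bundle_name = ?")) {
    ps.setString(1, bundleName);
    try (ResultSet rs = ps.executeQuery()) {
        while (rs.next()) {
            bundle.put(rs.getString(1), rs.getString(2));
        }
    }
}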
Refresh the bundle of your application:
SoftCache cache =
        (SoftCache) getFieldFromClass(ResourceBundle.class, "cacheList");
// getFieldFromClass / getFieldFromObject are reflection helpers (not shown)
// that read private fields; SoftCache is a JDK-internal class.
synchronized (cache) {
    ArrayList myBundles = new ArrayList();
    Iterator keyIter = cache.keySet().iterator();
    while (keyIter.hasNext()) {
        Object key = keyIter.next();
        String name = (String) getFieldFromObject(key, "searchName");
        if (name.startsWith(bundleName)) {
            myBundles.add(key);
            sLog.info("Resourcebundle " + name + " will be refreshed.");
        }
    }
    cache.keySet().removeAll(myBundles);
}
Get a String from the ResourceBundle of your application:
for (String resourcebundle : bundleNames) {
    String bundleName =
            resourcebundle + (bundlePostfix == null ? "" : bundlePostfix);
    try {
        bundle = ResourceBundle.getBundle(bundleName, locale, getCurrentLoader(bundleName));
    } catch (MissingResourceException e) {
        // bundle with this name not found
    }
    if (bundle == null)
        continue;
    try {
        message = bundle.getString(key);
        if (message != null)
            break;
    } catch (Exception e) {
        // key not in this bundle; try the next one
    }
}

Apache Flink JDBC InputFormat throwing java.net.SocketException: Socket closed

I am querying an Oracle database using the Flink DataSet API. For this I have customised the Flink JDBCInputFormat to return java.sql.ResultSet, as I need to perform further operations on the result set using Flink operators.
public static void main(String[] args) throws Exception {
    ExecutionEnvironment environment = ExecutionEnvironment.getExecutionEnvironment();
    environment.setParallelism(1);
    @SuppressWarnings("unchecked")
    DataSource<ResultSet> source
            = environment.createInput(JDBCInputFormat.buildJDBCInputFormat()
                    .setUsername("username")
                    .setPassword("password")
                    .setDrivername("driver_name")
                    .setDBUrl("jdbcUrl")
                    .setQuery("query")
                    .finish(),
            new GenericTypeInfo<ResultSet>(ResultSet.class)
    );
    source.print();
    environment.execute();
}
Following is the customised JDBCInputFormat:
public class JDBCInputFormat extends RichInputFormat<ResultSet, InputSplit> implements ResultTypeQueryable {
    // Fields (drivername, dbURL, username, password, queryTemplate,
    // resultSetType, resultSetConcurrency, dbConn, statement, resultSet,
    // isLastRecord) are declared elsewhere in the class.

    @Override
    public void open(InputSplit inputSplit) throws IOException {
        try {
            Class.forName(drivername);
            dbConn = DriverManager.getConnection(dbURL, username, password);
            statement = dbConn.prepareStatement(queryTemplate, resultSetType, resultSetConcurrency);
            resultSet = statement.executeQuery();
        } catch (ClassNotFoundException | SQLException e) {
            throw new IOException("Could not open JDBC input", e);
        }
    }

    @Override
    public void close() throws IOException {
        try {
            if (statement != null) {
                statement.close();
            }
            if (resultSet != null) {
                resultSet.close();
            }
            if (dbConn != null) {
                dbConn.close();
            }
        } catch (SQLException e) {
            throw new IOException("Could not close JDBC input", e);
        }
    }

    @Override
    public boolean reachedEnd() throws IOException {
        try {
            isLastRecord = resultSet.isLast();
            return isLastRecord;
        } catch (SQLException e) {
            throw new IOException(e);
        }
    }

    @Override
    public ResultSet nextRecord(ResultSet row) throws IOException {
        try {
            if (!isLastRecord) {
                resultSet.next();
            }
            return resultSet;
        } catch (SQLException e) {
            throw new IOException(e);
        }
    }
}
This works with the below query, which limits the rows fetched:
SELECT a,b,c from xyz where rownum <= 10;
but when I try to fetch all the rows (approximately 1 million), I get the below exception after fetching a random number of rows:
java.sql.SQLRecoverableException: Io exception: Socket closed
at oracle.jdbc.driver.SQLStateMapping.newSQLException(SQLStateMapping.java:101)
at oracle.jdbc.driver.DatabaseError.newSQLException(DatabaseError.java:133)
at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:199)
at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:263)
at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:521)
at oracle.jdbc.driver.T4CPreparedStatement.fetch(T4CPreparedStatement.java:1024)
at oracle.jdbc.driver.OracleResultSetImpl.close_or_fetch_from_next(OracleResultSetImpl.java:314)
at oracle.jdbc.driver.OracleResultSetImpl.next(OracleResultSetImpl.java:228)
at oracle.jdbc.driver.ScrollableResultSet.cacheRowAt(ScrollableResultSet.java:1839)
at oracle.jdbc.driver.ScrollableResultSet.isValidRow(ScrollableResultSet.java:1823)
at oracle.jdbc.driver.ScrollableResultSet.isLast(ScrollableResultSet.java:349)
at JDBCInputFormat.reachedEnd(JDBCInputFormat.java:98)
at org.apache.flink.runtime.operators.DataSourceTask.invoke(DataSourceTask.java:173)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:559)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.SocketException: Socket closed
at java.net.SocketOutputStream.socketWrite0(Native Method)
So in my case, how can I solve this issue?
I don't think it is possible to ship a ResultSet like a regular record. It is a stateful object that internally maintains a connection to the database server. Using a ResultSet as a record that is transferred between Flink operators means that it may be serialized, shipped over the network to another machine, deserialized, and handed to a different thread in a different JVM process. That does not work.
Depending on the connection, a ResultSet might as well stay on the same machine in the same thread, which might be the case that worked for you. If you want to query a database from within an operator, you could implement the function as a RichMapPartitionFunction. Otherwise, I'd read the ResultSet in the data source and forward the resulting rows, as in the sketch below.
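A hedged sketch of that last suggestion, modelled on what Flink's stock JDBCInputFormat does: advance the cursor once per record and copy the columns into a reusable org.apache.flink.types.Row. The class would then extend RichInputFormat<Row, InputSplit> and keep a boolean hasNext field, initialised with hasNext = resultSet.next() at the end of open():
@Override
public boolean reachedEnd() throws IOException {
    return !hasNext;
}

@Override
public Row nextRecord(Row reuse) throws IOException {
    try {
        // Copy the current row's columns into the reusable Row,
        // then advance the cursor for the next call.
        for (int pos = 0; pos < reuse.getArity(); pos++) {
            reuse.setField(pos, resultSet.getObject(pos + 1));
        }
        hasNext = resultSet.next();
        return reuse;
    } catch (SQLException e) {
        throw new IOException("Couldn't read next record", e);
    }
}
Reading forward-only this way also avoids the scrollable result set and the isLast() call that appear in the stack trace above, which force extra fetching on Oracle.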

SQL Server CLR Integration enlisting in current transaction

I'm trying to use CLR integration in SQL Server to handle accessing external files instead of storing them internally as BLOBs. I'm trying to figure out the pattern I need to follow to make my code enlist in the current SQL transaction. I figured I would start with the simplest scenario, deleting an existing row, since the insert/update scenarios would be more complex.
[SqlProcedure]
public static void DeleteStoredImages(SqlInt64 DocumentID)
{
    if (DocumentID.IsNull)
        return;
    using (var conn = new SqlConnection("context connection=true"))
    {
        conn.Open();
        string FaceFileName, RearFileName;
        int Offset, Length;
        GetFileLocation(conn, DocumentID.Value, true,
            out FaceFileName, out Offset, out Length);
        GetFileLocation(conn, DocumentID.Value, false,
            out RearFileName, out Offset, out Length);
        new DeleteTransaction().Enlist(FaceFileName, RearFileName);
        using (var comm = conn.CreateCommand())
        {
            comm.CommandText = "DELETE FROM ImagesStore WHERE DocumentID = " + DocumentID.Value;
            comm.ExecuteNonQuery();
        }
    }
}
private class DeleteTransaction : IEnlistmentNotification
{
    public string FaceFileName { get; set; }
    public string RearFileName { get; set; }

    public void Enlist(string FaceFileName, string RearFileName)
    {
        this.FaceFileName = FaceFileName;
        this.RearFileName = RearFileName;
        var trans = Transaction.Current;
        if (trans == null)
            Commit(null);
        else
            trans.EnlistVolatile(this, EnlistmentOptions.None);
    }

    public void Commit(Enlistment enlistment)
    {
        if (FaceFileName != null && File.Exists(FaceFileName))
        {
            File.Delete(FaceFileName);
        }
        if (RearFileName != null && File.Exists(RearFileName))
        {
            File.Delete(RearFileName);
        }
    }

    public void InDoubt(Enlistment enlistment)
    {
    }

    public void Prepare(PreparingEnlistment preparingEnlistment)
    {
        preparingEnlistment.Prepared();
    }

    public void Rollback(Enlistment enlistment)
    {
    }
}
When I actually try to run this, I get the following exception:
A .NET Framework error occurred during execution of user defined routine or aggregate 'DeleteStoredImages':
System.Transactions.TransactionException: The operation is not valid for the state of the transaction. ---> System.Transactions.TransactionPromotionException: MSDTC on server 'BD009' is unavailable. ---> System.Data.SqlClient.SqlException: MSDTC on server 'BD009' is unavailable.
System.Data.SqlClient.SqlException:
at System.Data.SqlServer.Internal.StandardEventSink.HandleErrors()
at System.Data.SqlServer.Internal.ClrLevelContext.SuperiorTransaction.Promote()
System.Transactions.TransactionPromotionException:
at System.Data.SqlServer.Internal.ClrLevelContext.SuperiorTransaction.Promote()
at System.Transactions.TransactionStatePSPEOperation.PSPEPromote(InternalTransaction tx)
at System.Transactions.TransactionStateDelegatedBase.EnterState(InternalTransaction tx)
System.Transactions.TransactionException:
at System.Transactions.TransactionState.EnlistVolatile(InternalTransaction tx, IEnlistmentNotification enlistmentNotification, EnlistmentOptions enlistmentOptions, Transaction atomicTransaction)
at System.Transactions.TransactionStateSubordinateActive.EnlistVolatile(InternalTransaction tx, IEnlistmentNotification enlistmentNotification, EnlistmentOptions enlistmentOptions, Transaction atomicTransaction)
at System.Transactions.Transaction.EnlistVolatile(IEnlistmentNotification enlistmentNotification, EnlistmentOptions enlistmentOptions)
at ExternalImages.StoredProcedures.DeleteTransaction.Enlist(String FaceFileName, String RearFileName)
at ExternalImages.StoredProcedures.DeleteStoredImages(SqlInt64 DocumentID)
. User transaction, if any, will be rolled back.
The statement has been terminated.
Can anyone explain what I'm doing wrong, or point me to an example of how to do it right?
You have hopefully solved this by now, but in case anyone else has a similar problem: the error message you are getting suggests that you need to start the Distributed Transaction Coordinator service on the BD009 machine (presumably your own machine).
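On a typical Windows install you can start the service from an elevated command prompt with net start msdtc, or set its startup type to Automatic in services.msc.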
@Aasmund's answer regarding the Distributed Transaction Coordinator might solve the stated problem, but that still leaves you in a non-ideal state: you are tying a transaction, which locks the ImagesStore table (even if it is just a RowLock), to two file system operations? And you need to BEGIN and COMMIT the transaction outside of this function (since that isn't being handled in the presented code).
I would separate those two pieces:
Step 1: Delete the row from the table
and then, IF that did not error,
Step 2: Delete the file(s)
In the scenario where Step 1 succeeds but then Step 2, for whatever reason, fails, do one or both of the following:
Return an error status code, and keep track in a status table of which DocumentIDs got an error when attempting to delete the file. You can use that to manually delete the files and/or debug why the error occurred.
Create a process that runs periodically to find and remove unreferenced files.
