I'm just learning to use the Redmine test environment.
When I do this:
rake db:drop db:create db:migrate redmine:plugins:migrate redmine:load_default_data RAILS_ENV=test
I get a failure:
-- drop_table(:open_id_authentication_nonces)
-> 0.0114s
== 20211213122101 DropOpenIdAuthenticationTables: migrated (0.0194s) ==========
== 20211213122102 RemoveOpenIdSetting: migrating ==============================
== 20211213122102 RemoveOpenIdSetting: migrated (0.0012s) =====================
== 20220224194639 DeleteOrphanedTimeEntryActivities: migrating ================
== 20220224194639 DeleteOrphanedTimeEntryActivities: migrated (0.0088s) =======
Select language: ar, az, bg, bs, ca, cs, da, de, el, en, en-GB, es, es-PA, et, eu, fa, fi, fr, gl, he, hr, hu, id, it, ja, ko, lt, lv, mk, mn, nl, no, pl, pt, pt-BR, ro, ru, sk, sl, sq, sr, sr-YU, sv, th, tr, uk, vi, zh, zh-TW [en]
====================================
Error: unknown attribute 'issues_visibility' for Role.
I have removed the two plugins I had installed; neither of them had db migrations.
Versions:
ruby: 3.0.4p208
redmine: 5.0.1.stable
rails: 6.1.6
The issues_visibility column is added to the roles table in a migration. However, the Role model class has probably cached a previous version of the database schema, so the schema assumed by the Role model can differ from the actual database schema as updated by the migrations.
To fix this, run the redmine:load_default_data task in a separate rake invocation, so that the default data is loaded against the up-to-date database schema produced by the migrations:
rake db:drop db:create db:migrate redmine:plugins:migrate RAILS_ENV=test
rake redmine:load_default_data RAILS_ENV=test
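If everything has to stay in a single invocation, one possible alternative (an untested sketch, not from the Redmine documentation) is to clear ActiveRecord's cached column information before the default data loader runs:

# Hypothetical one-off fix, e.g. in a custom rake task that executes
# between the migrations and the default data loader: drop the schema
# cached on the model so it picks up the new issues_visibility column.
Role.reset_column_information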
I've gone through many of the options for setting this up, and none worked with my SQL Server setup except sql_exporter. The connection succeeds and I can read all the built-in metrics, but when I try my own query against a specific database and its table, something is always wrong with my query, such as an "Invalid Object" error when trying to reach the database. I have tried many resources; what I would most like is a custom metric like the one in https://sysdig.com/blog/monitor-sql-server-prometheus/.
sql_exporter.yml:
# The target to monitor and the collectors to execute on it.
target:
  # Data source name always has a URI schema that matches the driver name. In some cases (e.g. MySQL)
  # the schema gets dropped or replaced to match the driver expected DSN format.
  data_source_name: 'sqlserver://username:password@localhost:1433'

  # Collectors (referenced by name) to execute on the target.
  collectors: [mssql_standard]

# Collector files specifies a list of globs. One collector definition is read from each matching file.
collector_files:
  - "*.collector.yml"
prometheus.yml:
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: "prometheus"
    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.
    static_configs:
      - targets: ["localhost:9090"]

  - job_name: 'sql_server'
    # Override the global default and scrape targets from this job every 5 seconds.
    scrape_interval: 5s
    static_configs:
      - targets: ['localhost:9966']
When I tried the custom metric from the post I linked, sql_exporter crashed instantly with no errors. My database does show up in the standard metrics of https://github.com/free/sql_exporter, but I am unsure of the syntax needed to execute a simple SELECT db_value FROM db_table. I understand there are ways to do this and I have tried them, so I need assistance. Thank you in advance!
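For reference, a minimal collector file in the format documented by https://github.com/free/sql_exporter might look like the sketch below; the collector name, metric name, database, table, and column are placeholders taken from the question, and the collector also has to be referenced in the collectors: list of sql_exporter.yml (e.g. collectors: [mssql_standard, db_custom]). Note that the DSN above names no database, so unqualified table names resolve against the login's default database; a fully qualified name may be what avoids the "Invalid Object" error.

# db_custom.collector.yml -- a minimal sketch; all names are placeholders.
collector_name: db_custom

metrics:
  - metric_name: db_value
    type: gauge
    help: 'Example metric read from db_table.'
    # The metric value is taken from this column of the result set.
    values: [db_value]
    query: |
      SELECT db_value FROM MyDatabase.dbo.db_table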
Stream Analytics job (IoT Hub to Cosmos DB output) "Start" command is failing with the following error.
[12:49:30 PM] Source 'cosmosiot' had 1 occurrences of kind
'OutputDataConversionError.RequiredColumnMissing' between processing
times '2019-04-17T02:49:30.2736530Z' and
'2019-04-17T02:49:30.2736530Z'.
I followed the instructions and am not sure what is causing this error.
Any suggestions, please? Here is the Stream Analytics query that writes to Cosmos DB:
SELECT
    [bearings temperature],
    [windings temperature],
    [tower sway],
    [position sensor],
    [blade strain gauge],
    [main shaft strain gauge],
    [shroud accelerometer],
    [gearbox fluid levels],
    [power generation],
    [EventProcessedUtcTime],
    [EventEnqueuedUtcTime],
    [IoTHub].[CorrelationId],
    [IoTHub].[ConnectionDeviceId]
INTO
    cosmosiot
FROM
    TurbineData
If you're specifying fields in your query (i.e. SELECT Name, ModelNumber ...) rather than just using SELECT * ..., the field names are converted to lowercase by default under compatibility level 1.0, which throws off Cosmos DB. In the portal, open your Stream Analytics job, go to 'Compatibility level' under the 'Configure' section, and select v1.1 or higher; that should fix the issue. You can read more about compatibility levels in the Stream Analytics documentation: https://learn.microsoft.com/en-us/azure/stream-analytics/stream-analytics-compatibility-level
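As a minimal illustration of that mechanism (using a column from the question's own query, and assuming Cosmos DB is configured to expect the original casing, e.g. as a partition key):

-- Under compatibility level 1.0 this output field arrives in Cosmos DB as
-- 'connectiondeviceid' rather than 'ConnectionDeviceId', so a required
-- column configured with the original casing is reported as missing.
SELECT [IoTHub].[ConnectionDeviceId]
INTO cosmosiot
FROM TurbineData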
DbUnit returns a difference for a double value in row 78:
Exception in thread "main" junit.framework.ComparisonFailure: value (table=dataset, row=78, col=DirtyValue) expected:<4901232.27291950[7]> but was:<4901232.27291950[6]>
So I assume that SQL Server returns 4901232.272919507 while HANA returns 4901232.272919506 (based on the answer to JUnit assertEquals Changes String).
Then I tried to set the tolerated delta according to the FAQ entry "Is there an equivalent to JUnit's assertEquals(double expected, double actual, double delta) to define a tolerance level when comparing numeric values?"
But I still get the same error. Any ideas?
Additional information
Maybe this is the reason:
[main] WARN org.dbunit.dataset.AbstractTableMetaData - Potential problem found: The configured data type factory 'class org.dbunit.dataset.datatype.DefaultDataTypeFactory' might cause problems with the current database 'Microsoft SQL Server' (e.g. some datatypes may not be supported properly). In rare cases you might see this message because the list of supported database products is incomplete (list=[derby]). If so please request a java-class update via the forums.If you are using your own IDataTypeFactory extending DefaultDataTypeFactory, ensure that you override getValidDbProducts() to specify the supported database products.
[main] WARN org.dbunit.dataset.AbstractTableMetaData - Potential problem found: The configured data type factory 'class org.dbunit.dataset.datatype.DefaultDataTypeFactory' might cause problems with the current database 'HDB' (e.g. some datatypes may not be supported properly). In rare cases you might see this message because the list of supported database products is incomplete (list=[derby]). If so please request a java-class update via the forums.If you are using your own IDataTypeFactory extending DefaultDataTypeFactory, ensure that you override getValidDbProducts() to specify the supported database products.
DbUnit Version 2.5.4
DirtyValue is calculated from 3 double values in both systems.
SQL Server
SELECT TypeOfGroup, Segment, Portfolio, UniqueID, JobId, DirtyValue, PosUnits, FX_RATE, THEO_Value
FROM DATASET_PL
order by JobId, TypeOfGroup, Segment, Portfolio, UniqueID COLLATE Latin1_General_bin
HANA
SELECT "TypeOfGroup", "Segment", "Portfolio", "UniqueID", "JobId", "DirtyValue", Pos_Units as "PosUnits", FX_RATE, THEO_Value as "THEO_Value"
FROM "_SYS_BIC"."meag.app.h4q.metadata.dataset.pnl/06_COMPARE_CUBES_AND_CALC_ATTR"
order by "JobId", "TypeOfGroup", "Segment", "Portfolio", "UniqueID"
Work-around
Use a DiffCollectingFailureHandler and handle the differences there:
// Collect all differences instead of failing on the first one.
DiffCollectingFailureHandler diffHandler = new DiffCollectingFailureHandler();
// The handler must be passed to the assertion; otherwise the comparison
// throws on the first difference and nothing is collected.
Assertion.assertEquals(expectedTable, actualTable, diffHandler);

List<Difference> diffList = diffHandler.getDiffList();
for (Difference diff : diffList) {
    if (diff.getColumnName().equals("DirtyValue")) {
        double actual = (double) diff.getActualValue();
        double expected = (double) diff.getExpectedValue();
        // Tolerate rounding noise below 0.00001; note the delta is taken
        // on the difference itself, not on the absolute values.
        if (Math.abs(actual - expected) > 0.00001) {
            logDiff(diff);
        } else {
            logDebugDiff(diff);
        }
    } else {
        logDiff(diff);
    }
}

private void logDiff(Difference diff) {
    logger.error(String.format("Diff found in row:%s, col:%s expected:%s, actual:%s",
            diff.getRowIndex(), diff.getColumnName(), diff.getExpectedValue(), diff.getActualValue()));
}

private void logDebugDiff(Difference diff) {
    logger.debug(String.format("Diff found in row:%s, col:%s expected:%s, actual:%s",
            diff.getRowIndex(), diff.getColumnName(), diff.getExpectedValue(), diff.getActualValue()));
}
The question was "Any ideas?", so maybe it helps to understand why the difference occurs.
HANA truncates if needed; see the numeric types section of the "HANA SQL and System Views Reference". In HANA, the following statement results in 123.45:
select cast( '123.456' as decimal(6,2)) from dummy;
SQL Server rounds if needed, at least if the target data type is numeric; see e.g. "Truncating and rounding results" in the SQL Server documentation.
The same statement results in 123.46 in SQL Server.
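For comparison (SQL Server has no dummy table, so the FROM clause is dropped):

SELECT CAST('123.456' AS decimal(6,2)); -- returns 123.46 (rounded, not truncated)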
And the SQL standard seems to leave open whether to round or to truncate; see this answer on SO.
I am not aware of any setting that changes the rounding behavior in HANA, but maybe there is one.
I am getting an "Object already exists" error while configuring the RM server on Windows Server 2008 R2 Enterprise. Please find the log below:
I, 2014/10/29, 08:18:40.108, Variable : Key = DefaultLogin, Value = GAP-RELEASE\BuildUser
I, 2014/10/29, 08:18:40.124, Variable : Key = DefaultAdmin, Value = GAP-RELEASE\BuildUser
I, 2014/10/29, 08:18:40.124, Variable : Key = DatabaseName, Value = ReleaseManagement
I, 2014/10/29, 08:18:40.124, Variable : Key = DefaultLocalService, Value = NT AUTHORITY\LOCAL SERVICE
I, 2014/10/29, 08:18:53.384, Database ReleaseManagement, version 12.0.30723.0 was installed successfully.
I, 2014/10/29, 08:18:53.399, Created Release Management database.
E, 2014/10/29, 08:18:53.462, Received Exception : System.Security.Cryptography.CryptographicException: Object already exists.
at System.Security.Cryptography.CryptographicException.ThrowCryptographicException(Int32 hr)
at System.Security.Cryptography.Utils._CreateCSP(CspParameters param, Boolean randomKeyContainer, SafeProvHandle& hProv)
at System.Security.Cryptography.Utils.CreateProvHandle(CspParameters parameters, Boolean randomKeyContainer)
at System.Security.Cryptography.Utils.GetKeyPairHelper(CspAlgorithmType keyType, CspParameters parameters, Boolean randomKeyContainer, Int32 dwKeySize, SafeProvHandle& safeProvHandle, SafeKeyHandle& safeKeyHandle)
at System.Security.Cryptography.RSACryptoServiceProvider.GetKeyPair()
at Microsoft.TeamFoundation.Release.CommonConfiguration.Helpers.CryptoHelper.ConfigureServerCryptoKey(String serverName, String databaseName)
at Microsoft.TeamFoundation.Release.Configuration.ConfigurationManager.Configure(ConfigurationUpdatePack updatePack, DelegateStatusUpdate statusListener)
at System.ComponentModel.BackgroundWorker.WorkerThreadStart(Object argument)
I, 2014/10/29, 08:18:53.462, Work completed for GetConfiguration() call : got out of turn error
E, 2014/10/29, 08:18:53.462, Object already exists.
I had the same issue. A lot of people point to this solution: http://blogs.objectsharp.com/post/2014/11/04/%E2%80%9CObject-already-exists%E2%80%9D-error-during-Release-Management-server-configuration.aspx
That didn't work for me, and I finally had the infra team restore the machine. That fixed the problem for me.
In my case the database server was on another machine and I kept all my release paths and templates.
The only thing I had to reconfigure was the IIS path: http://localhost:1000/releasemanagement
Make your Release Management service account a local administrator on the Release Management server.
Update: I recently ran into this issue at a client. The service account was a local admin, but it didn't have the necessary permissions on the Machine Keys folder (C:\Users\All Users\Microsoft\Crypto\RSA\MachineKeys). For some reason I couldn't apply the permissions; I received an Access Denied message even with a fully privileged account.
I ended up solving it as follows:
Take ownership of the MachineKeys folder with the service user
Give full read/write permission to the service user
Reset ownership of the folder to the SYSTEM account
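A command-line sketch of those three steps (run the takeown step from an elevated prompt as the service user; the folder resolves to %ProgramData%\Microsoft\Crypto\RSA\MachineKeys, and the account name below is a placeholder):

rem Step 1: take ownership of the folder (run as the service user).
takeown /F "%ProgramData%\Microsoft\Crypto\RSA\MachineKeys" /R /D Y
rem Step 2: grant the service user read/write access.
icacls "%ProgramData%\Microsoft\Crypto\RSA\MachineKeys" /grant "DOMAIN\RMServiceAccount":(R,W) /T
rem Step 3: hand ownership back to SYSTEM.
icacls "%ProgramData%\Microsoft\Crypto\RSA\MachineKeys" /setowner "NT AUTHORITY\SYSTEM" /T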
In SQL Server 2012 Data Quality Services, I need to clean the data with a Term-Based Relation as follows:
String     Replace to
Wal        walmart
Wlr        walmart
Wlt        walmart
Walmart    (empty)
That is, the words "Wal", "Wlr", and "Wlt" have to be replaced with "walmart", and finally "walmart" is replaced with an empty value.
It shows the error:
SQL Server Data Quality Services
--------------------------------------------------------------------------------
2/1/2013 2:48:37 PM
Message Id: DataValueServiceTermBasedRelationCorrectedValueAlreadyCorrectingValue
Term Based Relation (walmart, ) cannot be added for domain 'keywordphrase' because 'walmart' value already exists as a correcting value.
--------------------------------------------------------------------------------
Microsoft.Ssdqs.DataValueService.Service.DataValueServiceException: Term Based Relation (walmart, ) cannot be added for domain 'keywordphrase' because 'walmart' value already exists as a correcting value.
at Microsoft.Ssdqs.DataValueService.Managers.DomainTermBasedRelationManager.PreapareAndValidateRelation(DomainTermBasedRelation relation, IMasterContext context)
at Microsoft.Ssdqs.DataValueService.Managers.DomainTermBasedRelationManager.Add(IMasterContext context, ServiceDefinitionBase data)
at Microsoft.Ssdqs.DataValueService.Service.DataValueServiceConcrete.Add(IMasterContext context, ReadOnlyCollection`1 data)
Any suggestions for a solution?
Thanks,
It is my understanding that DQS does not support multi-level replacements (i.e. a->b then b->c). Why not go straight to blanks for the first three terms?
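That is, a single-level mapping in which each term is corrected directly to a blank value (sketched below; whether "Walmart" itself still needs a relation depends on the data):

String     Replace to
Wal        (blank)
Wlr        (blank)
Wlt        (blank)
Walmart    (blank)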