I have 136 wav files named numerically (1.wav to 136.wav), plus a wav file that contains 2 seconds of silence. I would like to combine them as follows, generating several other files:
1 + silence + 2 + silence (copy) + 3
1 + silence + 2 + silence (copy) + 4
2 + silence + 1 + silence (copy) + 3
2 + silence + 1 + silence (copy) + 4
5 + silence + 6 + silence (copy) + 7
5 + silence + 6 + silence (copy) + 8
6 + silence + 5 + silence (copy) + 7
6 + silence + 5 + silence (copy) + 8
I have the script below, which combines the files as follows:
1 + silence + 2 + silence (copy) + 1 (copy)
1 + silence + 2 + silence (copy) + 2 (copy)
2 + silence + 1 + silence (copy) + 1 (copy)
2 + silence + 1 + silence (copy) + 1 (copy)
3 + silence + 4 + silence (copy) + 2 (copy)
3 + silence + 4 + silence (copy) + 3 (copy)
4 + silence + 3 + silence (copy) + 3 (copy)
4 + silence + 3 + silence (copy) + 4 (copy)
str = Create Strings as file list... soundlist 'directory$'/*.wav
num_file = Get number of strings
#writeInfoLine: num_file
#Copy... silence2
for i to num_file
    selectObject: str
    fileName$ = Get string: i
    #writeInfoLine: fileName$
    n_fn$ = fileName$
    num_c = number(n_fn$ - ".wav")
    n_sound_file$ = string$(num_c+1) + ".wav"
    if num_c mod 2 == 1
        cp_f1$ = fileName$ - ".wav" + "cp"
        cp_f2$ = n_sound_file$ - ".wav" + "cp"
        cp_sil1$ = "sil1"
        f_file$ = "Sound " + fileName$ - ".wav"
        s_file$ = "Sound " + n_sound_file$ - ".wav"
        f_file2$ = "Sound " + cp_f1$
        s_file2$ = "Sound " + cp_f2$
        sil_file$ = "Sound " + "silence"
        sil_file2$ = "Sound " + "sil1"
        writeInfoLine: cp_f1$
        new_f1$ = "Sound " + f_file$ + "_sil_" + s_file$ + "_sil_" + f_file$ + ".wav"
        new_f2$ = "Sound " + f_file$ + "_sil_" + s_file$ + "_sil_" + s_file$ + ".wav"
        new_f3$ = "Sound " + s_file$ + "_sil_" + f_file$ + "_sil_" + f_file$ + ".wav"
        new_f4$ = "Sound " + s_file$ + "_sil_" + f_file$ + "_sil_" + s_file$ + ".wav"
        #########
        Read from file: directory$ + "/" + fileName$
        Read from file: directory$ + "/" + "silence.wav"
        Read from file: directory$ + "/" + n_sound_file$
        selectObject: sil_file$
        Copy... sil1
        selectObject: f_file$
        Copy... 'cp_f1$'
        selectObject: f_file$
        plusObject: sil_file$
        plusObject: s_file$
        plusObject: sil_file2$
        plusObject: f_file2$
        Concatenate
        Save as WAV file... 'directory$'/'new_f1$'
        removeObject: f_file$, sil_file$, sil_file2$, f_file2$
        #########
        Read from file: directory$ + "/" + fileName$
        Read from file: directory$ + "/" + "silence.wav"
        Read from file: directory$ + "/" + n_sound_file$
        selectObject: sil_file$
        Copy... sil1
        selectObject: s_file$
        Copy... 'cp_f2$'
        selectObject: f_file$
        plusObject: sil_file$
        plusObject: s_file$
        plusObject: sil_file2$
        plusObject: s_file2$
        Concatenate
        Save as WAV file... 'directory$'/'new_f2$'
        removeObject: f_file$, sil_file$, sil_file2$, s_file2$
        #####
        Read from file: directory$ + "/" + n_sound_file$
        Read from file: directory$ + "/" + "silence.wav"
        Read from file: directory$ + "/" + fileName$
        selectObject: sil_file$
        Copy... sil1
        selectObject: s_file$
        Copy... 'cp_f2$'
        selectObject: s_file$
        plusObject: sil_file$
        plusObject: f_file$
        plusObject: sil_file2$
        plusObject: s_file2$
        Concatenate
        Save as WAV file... 'directory$'/'new_f3$'
        removeObject: s_file$, sil_file$, sil_file2$, s_file2$
        ####
        Read from file: directory$ + "/" + n_sound_file$
        Read from file: directory$ + "/" + "silence.wav"
        Read from file: directory$ + "/" + fileName$
        selectObject: sil_file$
        Copy... sil1
        selectObject: f_file$
        Copy... 'cp_f1$'
        selectObject: s_file$
        plusObject: sil_file$
        plusObject: f_file$
        plusObject: sil_file2$
        plusObject: f_file2$
        Concatenate
        Save as WAV file... 'directory$'/'new_f4$'
    endif
endfor
select all
Remove
The script that I have so far pairs each file named with an odd number with the following even-numbered file. For my current purpose, the algorithm needs to be more complex. I would be grateful for some help!
[Disclaimer: I am the author of the mentioned Parselmouth library]
I am not an expert on Praat scripting, but if you are willing to use Python (and the Parselmouth library to access Praat functionality from Python), the following Python code seems to work for me:
import glob
import parselmouth

directory = "/the/directory/of/your/choice"
output_directory = "/the/directory/of/your/choice/output"

all_files = glob.glob(directory + "/*.wav")
n_files = len(all_files) - 1  # Number of files minus 'silence.wav'

def concatenate_and_save(first, second, third):
    silence = parselmouth.Sound(directory + "/silence.wav")
    sound1 = parselmouth.Sound(directory + "/" + str(first) + ".wav")
    sound2 = parselmouth.Sound(directory + "/" + str(second) + ".wav")
    sound3 = parselmouth.Sound(directory + "/" + str(third) + ".wav")
    concatenated = parselmouth.Sound.concatenate([sound1, silence, sound2, silence, sound3])
    concatenated.save(output_directory + "/" + str(first) + "_sil_" + str(second) + "_sil_" + str(third) + ".wav", "WAV")

for i in range(1, n_files + 1, 4):
    concatenate_and_save(i, i + 1, i + 2)
    concatenate_and_save(i, i + 1, i + 3)
    concatenate_and_save(i + 1, i, i + 2)
    concatenate_and_save(i + 1, i, i + 3)
Apologies if you needed the code as a Praat script, but I expect it would be possible and straightforward to replicate the idea of this algorithm (i.e., going through the files in steps of 4 and then using the right combinations of i, i+1, i+2, and i+3, perhaps with a procedure so you don't repeat yourself) in a Praat script.
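For what it's worth, here is a rough, untested sketch of that idea as a Praat script. This is only my assumption of how it could look (using modern Praat syntax, assuming 136 numbered files, and assuming directory$ and outputDirectory$ are set beforehand); the procedure name and variables are mine:

# Read the three sounds and two silences in playback order, concatenate, save.
# Concatenate joins the selected Sounds in object-list (here: creation) order.
procedure concatSave: .first, .second, .third
    .sound1 = Read from file: directory$ + "/" + string$(.first) + ".wav"
    .sil1 = Read from file: directory$ + "/silence.wav"
    .sound2 = Read from file: directory$ + "/" + string$(.second) + ".wav"
    .sil2 = Read from file: directory$ + "/silence.wav"
    .sound3 = Read from file: directory$ + "/" + string$(.third) + ".wav"
    selectObject: .sound1, .sil1, .sound2, .sil2, .sound3
    .result = Concatenate
    Save as WAV file: outputDirectory$ + "/" + string$(.first) + "_sil_" + string$(.second) + "_sil_" + string$(.third) + ".wav"
    removeObject: .sound1, .sil1, .sound2, .sil2, .sound3, .result
endproc

# Praat's for loop has no step size, so step through the 34 blocks of four.
for block from 0 to 33
    i = 4 * block + 1
    @concatSave: i, i + 1, i + 2
    @concatSave: i, i + 1, i + 3
    @concatSave: i + 1, i, i + 2
    @concatSave: i + 1, i, i + 3
endfor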
I have a relatively big dataset, and my mixed-effects logistic regression is as below. Is it normal for it to take that long to run, or have I made a mistake?
library(lme4)
glmer_EBRD_undersample_1 <- glmer(leave_happened ~
performance_rating_2016 + performance_rating_2017 + performance_rating_2018 + performance_rating_2019 + performance_rating_2020
+ gender
+ target_group
+ target_pmf_band
+ target_hq_or_ro
+ target_office_location_country_distilled
+ target_org_unit_cost_centre_code_distilled
+ target_ebrd_region_distilled
+ target_contract_group_distilled
+ target_position_tenure_group
+ target_length_of_service_group_distilled
+ leaves_to_date
+ moves_to_date
+ joins_to_date
+ applied_count_to_date
+ line_reviewed_to_date
+ interviewed_to_date
+ offered_to_date
+ hired_to_date
+ (1 | person_id)
,
data = train_undersample_1,
family = binomial,
control = glmerControl(optimizer = "bobyqa"),
nAGQ = 10
)
summary(glmer_EBRD_undersample_1)
It also gave a warning like this:
Warning in commonArgs(par, fn, control, environment()) :
maxfun < 10 * length(par)^2 is not recommended.
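As a side note, the warning means bobyqa's iteration cap (maxfun) is small relative to the number of parameters being estimated. A hedged sketch of two things worth trying (assuming the same data; the fixed effects are abbreviated here for space, so this is illustration rather than a definitive fix): raise maxfun via optCtrl, and fit with nAGQ = 1 first, since adaptive quadrature with 10 points makes every likelihood evaluation much more expensive.

library(lme4)

# Sketch: raise the optimizer's iteration cap and start with the cheaper
# Laplace approximation (nAGQ = 1) before moving to nAGQ = 10.
ctrl <- glmerControl(optimizer = "bobyqa", optCtrl = list(maxfun = 2e5))

fit <- glmer(leave_happened ~ gender + leaves_to_date + (1 | person_id),
             data = train_undersample_1,   # abbreviated fixed effects shown
             family = binomial,
             control = ctrl,
             nAGQ = 1)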
I am trying to get a date format like "2018-05-17T08:09:02", but when I try the code below I get "2018-05-17T8:9:2".
Can someone help me get this format, "2018-05-17T08:09:02"?
let d = new Date();
console.log("date>> "+d.getFullYear() + "-" + ((d.getMonth() + 1) < 10 ? '0' : '') +
(d.getMonth() + 1) + "-" + d.getDate() + "T" +( d.getHours() )+ ":"+ d.getMinutes() + ":"+ d.getSeconds());
According to How to format numbers by prepending 0 to single-digit numbers?,
your desired answer is:
let d = new Date();
console.log("date>> "+d.getFullYear() + "-" + ((d.getMonth() + 1) < 10 ? '0' : '') +
(d.getMonth() + 1) + "-" + d.getDate() + "T" +("0" + d.getHours()).slice(-2)+ ":"+ ("0" + d.getMinutes()).slice(-2) + ":"+ ("0" + d.getSeconds()).slice(-2));
But as @Aleksey Solovey already mentioned above, I would also recommend using d.toISOString().slice(0,-5) (note that toISOString() returns UTC time, not local time).
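As a further sketch (my own suggestion, not from the answers above): a small padding helper using padStart, available since ES2017, keeps the logic in one place, and also pads the day of the month, which both snippets above leave unpadded on single-digit days:

let d = new Date();

// Pad any date part to two digits; padStart requires ES2017 or newer.
const pad = (n) => String(n).padStart(2, "0");

console.log("date>> " + d.getFullYear() + "-" + pad(d.getMonth() + 1) + "-" +
    pad(d.getDate()) + "T" + pad(d.getHours()) + ":" + pad(d.getMinutes()) +
    ":" + pad(d.getSeconds()));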
My aliases keep getting renamed to Expr1, Expr2, etc. after I save my view. I have multiple CASE WHEN ... THEN statements in the view that use the same aliases (X0, Y0, and Z0). Essentially, every case statement after the first has its aliases replaced with Expr1, Expr2, and so on.
Any help would be extremely appreciated. I also tried putting square brackets around the aliases, but that did not work.
I should also note that when I go to save, I get a warning from SQL Server Management Studio about my ORDER BY clause. It's probably unrelated, but you never know.
SELECT
TOP (100) PERCENT
s.dtmEvaluation AS SurgeryDate,
p.idPatient,
d.strLead AS LeadType,
ds.strDBSSite AS Target,
CASE
WHEN intSide = 0 THEN 'L'
ELSE 'R'
END AS Side,
ROUND(s.dblACX, 2, 1) AS ACX,
ROUND(s.dblACY, 2, 1) AS ACY,
ROUND(s.dblACZ, 2, 1) AS ACZ,
ROUND(s.dblPCX, 2, 1) AS PCX,
ROUND(s.dblPCY, 2, 1) AS PCY,
ROUND(s.dblPCZ, 2, 1) AS PCZ,
ROUND(s.dblInitX, 2, 1) AS InitialX,
ROUND(s.dblInitY, 2, 1) AS [Initial Y],
ROUND(s.dblInitZ, 2, 1) AS [Initial Z],
s.dblACPCAngle AS InitialACPCAngle,
s.dblCenterAngle AS InitialCLangle,
s.dblMicroPasses AS MicroPasses,
s.dblMacroPasses AS MacroPasses,
ROUND(s.dblFinalX, 2, 1) AS FinalX,
ROUND(s.dblFinalY, 2, 1) AS FinalY,
ROUND(s.dblFinalZ, 2, 1) AS FinalZ,
ROUND(s.dblMeasuredX, 2, 1) AS MeasX,
ROUND(s.dblMeasuredY, 2, 1) AS MeasY,
ROUND(s.dblMeasuredZ, 2, 1) AS MeasZ,
s.dblMeasuredACPCAngle,
s.dblMeasuredCtrAngle,
ROUND(SQRT(POWER(s.dblMeasuredX - s.dblFinalX, 2) + POWER(s.dblMeasuredY - s.dblFinalY, 2) + POWER(s.dblMeasuredZ - s.dblFinalZ,2)), 2, 1) AS Delta,
CASE
    WHEN s.intSide = 1 THEN s.dblMeasuredX + ((d.dblElectrodeLength - 1) + 0 * (d.dblElectrodeSpacing + d.dblElectrodeLength) + (d.dblElectrodeLength / 2)) * SIN(RADIANS(s.dblMeasuredCtrAngle))
    ELSE s.dblMeasuredX - ((d.dblElectrodeLength - 1) + 0 * (d.dblElectrodeSpacing + d.dblElectrodeLength) + (d.dblElectrodeLength / 2)) * SIN(RADIANS(s.dblMeasuredCtrAngle))
END AS X0,
s.dblMeasuredY + (((d.dblElectrodeLength - 1) + 0 * (d.dblElectrodeSpacing + d.dblElectrodeLength)) + d.dblElectrodeLength / 2) * COS(RADIANS(s.dblMeasuredACPCAngle)) * COS(RADIANS(s.dblMeasuredCtrAngle)) AS Y0,
s.dblMeasuredZ + (((d.dblElectrodeLength - 1) + 0 * (d.dblElectrodeSpacing + d.dblElectrodeLength)) + d.dblElectrodeLength / 2) * SIN(RADIANS(s.dblMeasuredACPCAngle)) * COS(RADIANS(s.dblMeasuredCtrAngle)) AS Z0,
CASE
    WHEN s.intSide = 1 THEN s.dblMeasuredX + ((d.dblElectrodeLength - 1) + 1 * (d.dblElectrodeSpacing + d.dblElectrodeLength) + (d.dblElectrodeLength / 2)) * SIN(RADIANS(s.dblMeasuredCtrAngle))
    ELSE s.dblMeasuredX - ((d.dblElectrodeLength - 1) + 1 * (d.dblElectrodeSpacing + d.dblElectrodeLength) + (d.dblElectrodeLength / 2)) * SIN(RADIANS(s.dblMeasuredCtrAngle))
END AS Expr1,
s.dblMeasuredY + (((d.dblElectrodeLength - 1) + 1 * (d.dblElectrodeSpacing + d.dblElectrodeLength)) + d.dblElectrodeLength / 2) * COS(RADIANS(s.dblMeasuredACPCAngle)) * COS(RADIANS(s.dblMeasuredCtrAngle)) AS Expr2,
s.dblMeasuredZ + (((d.dblElectrodeLength - 1) + 1 * (d.dblElectrodeSpacing + d.dblElectrodeLength)) + d.dblElectrodeLength / 2) * SIN(RADIANS(s.dblMeasuredACPCAngle)) * COS(RADIANS(s.dblMeasuredCtrAngle)) AS Expr3,
CASE
    WHEN s.intSide = 1 THEN s.dblMeasuredX + ((d.dblElectrodeLength - 1) + 2 * (d.dblElectrodeSpacing + d.dblElectrodeLength) + (d.dblElectrodeLength / 2)) * SIN(RADIANS(s.dblMeasuredCtrAngle))
    ELSE s.dblMeasuredX - ((d.dblElectrodeLength - 1) + 2 * (d.dblElectrodeSpacing + d.dblElectrodeLength) + (d.dblElectrodeLength / 2)) * SIN(RADIANS(s.dblMeasuredCtrAngle))
END AS Expr4,
s.dblMeasuredY + (((d.dblElectrodeLength - 1) + 2 * (d.dblElectrodeSpacing + d.dblElectrodeLength)) + d.dblElectrodeLength / 2) * COS(RADIANS(s.dblMeasuredACPCAngle)) * COS(RADIANS(s.dblMeasuredCtrAngle)) AS Expr5,
s.dblMeasuredZ + (((d.dblElectrodeLength - 1) + 2 * (d.dblElectrodeSpacing + d.dblElectrodeLength)) + d.dblElectrodeLength / 2) * SIN(RADIANS(s.dblMeasuredACPCAngle)) * COS(RADIANS(s.dblMeasuredCtrAngle)) AS Expr6,
CASE
    WHEN s.intSide = 1 THEN s.dblMeasuredX + ((d.dblElectrodeLength - 1) + 3 * (d.dblElectrodeSpacing + d.dblElectrodeLength) + (d.dblElectrodeLength / 2)) * SIN(RADIANS(s.dblMeasuredCtrAngle))
    ELSE s.dblMeasuredX - ((d.dblElectrodeLength - 1) + 3 * (d.dblElectrodeSpacing + d.dblElectrodeLength) + (d.dblElectrodeLength / 2)) * SIN(RADIANS(s.dblMeasuredCtrAngle))
END AS Expr7,
s.dblMeasuredY + (((d.dblElectrodeLength - 1) + 3 * (d.dblElectrodeSpacing + d.dblElectrodeLength)) + d.dblElectrodeLength / 2) * COS(RADIANS(s.dblMeasuredACPCAngle)) * COS(RADIANS(s.dblMeasuredCtrAngle)) AS Expr8,
s.dblMeasuredZ + (((d.dblElectrodeLength - 1) + 3 * (d.dblElectrodeSpacing + d.dblElectrodeLength)) + d.dblElectrodeLength / 2) * SIN(RADIANS(s.dblMeasuredACPCAngle)) * COS(RADIANS(s.dblMeasuredCtrAngle)) AS Expr9,
s.dblAtlasScaleX,
s.dblAtlasScaleY,
s.dblAtlasScaleZ,
s.dblAtlasMovementX,
s.dblAtlasMovementY,
s.dblAtlasMovementZ,
s.dblAtlasRotationX,
s.dblAtlasRotationY,
s.dblAtlasRotationZ
FROM dbo.tblDBSSurgery AS s
INNER JOIN dbo.tblPatientDemographics AS p
    ON p.idPatient = s.idPatient
LEFT OUTER JOIN dbo.tblLookupLeads AS d
    ON d.idLead = s.intLeadType
LEFT OUTER JOIN dbo.tblLookupDBSSites AS ds
    ON ds.idDBSSite = s.intSite
WHERE (s.intProcedure = 0 OR s.intProcedure = 2)
    AND (s.blnOutside = 0)
    AND (NOT (p.strMRN = '09999999'))
    AND (NOT (p.strMRN = '08888888'))
ORDER BY SurgeryDate
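One possible explanation (my own observation, not from the original post): SQL Server requires every column of a view to have a unique name, so if several expressions in one SELECT share an alias such as X0, the designer has to rename the duplicates, and ExprN is its fallback. A minimal sketch of the underlying restriction (dbo.vwDup is a hypothetical name):

-- Hypothetical repro: a view cannot expose two columns with the same name.
CREATE VIEW dbo.vwDup AS
SELECT 1 AS X0, 2 AS X0;
-- Fails with Msg 4506: "Column names in each view or function must be unique."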
I would like to take the mean of a 5x5 window.
The window will move +1 along the x axis; after the first pass along the x axis is finished, the window will move +1 along the y axis and start a new loop. I should also say that bant4.tif is a 512x512 image.
from PIL import Image
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.image as mpimg

nir = mpimg.imread('bant4.tif')
A = np.array(nir)

for k in range(0, 508):
    for j in range(0, 508):
        ilk = A[k][j] + A[k][j+1] + A[k][j+2] + A[k][j+3] + A[k][j+4]
        ikinci = A[k+1][j] + A[k+1][j+1] + A[k+1][j+2] + A[k+1][j+3] + A[k+1][j+4]
        ucuncu = A[k+2][j] + A[k+2][j+1] + A[k+2][j+2] + A[k+2][j+3] + A[k+2][j+4]
        dorduncu = A[k+3][j] + A[k+3][j+1] + A[k+3][j+2] + A[k+3][j+3] + A[k+3][j+4]
        besinci = A[k+4][j] + A[k+4][j+1] + A[k+4][j+2] + A[k+4][j+3] + A[k+4][j+4]
        ort = (ilk + ikinci + ucuncu + dorduncu + besinci) / 5
        ort.tofile('ort.txt')
Here are the warnings I get:
Warning (from warnings module):
File "C:\Users\Celik\Desktop\ödev5.py", line 19
ort = (ilk + ikinci + ucuncu + dorduncu + besinci) / 5
RuntimeWarning: overflow encountered in ubyte_scalars
Warning (from warnings module):
File "C:\Users\Celik\Desktop\ödev5.py", line 17
besinci = A[k+4][j] + A[k+4][j+1] + A[k+4][j+2] + A[k+4][j+3] + A[k+4][j+4]
RuntimeWarning: overflow encountered in ubyte_scalars
Warning (from warnings module):
File "C:\Users\Celik\Desktop\ödev5.py", line 14
ikinci = A[k+1][j] + A[k+1][j+1] + A[k+1][j+2] + A[k+1][j+3] + A[k+1][j+4]
RuntimeWarning: overflow encountered in ubyte_scalars
Warning (from warnings module):
File "C:\Users\Celik\Desktop\ödev5.py", line 15
ucuncu = A[k+2][j] + A[k+2][j+1] + A[k+2][j+2] + A[k+2][j+3] + A[k+2][j+4]
RuntimeWarning: overflow encountered in ubyte_scalars
Warning (from warnings module):
File "C:\Users\Celik\Desktop\ödev5.py", line 13
ilk = A[k][j] + A[k][j+1] + A[k][j+2] + A[k][j+3] + A[k][j+4]
RuntimeWarning: overflow encountered in ubyte_scalars
Warning (from warnings module):
File "C:\Users\Celik\Desktop\ödev5.py", line 16
dorduncu = A[k+3][j] + A[k+3][j+1] + A[k+3][j+2] + A[k+3][j+3] + A[k+3][j+4]
RuntimeWarning: overflow encountered in ubyte_scalars
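For reference, these are overflow warnings rather than errors: the image loads as uint8, so sums of pixel values wrap around at 255. Below is a minimal sketch of an overflow-safe version (my own illustration, assuming bant4.tif is the 512x512 grayscale image described above; note also that the true mean of a 5x5 window divides by 25, not 5):

import numpy as np
import matplotlib.image as mpimg

nir = mpimg.imread('bant4.tif')
A = np.array(nir, dtype=np.float64)  # promote from uint8 so sums cannot overflow

means = np.empty((508, 508))
for k in range(508):
    for j in range(508):
        # Mean over all 25 pixels of the 5x5 window.
        means[k, j] = A[k:k+5, j:j+5].mean()

np.savetxt('ort.txt', means)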
Here is the index-creation code:
g = TitanFactory.build().set("storage.backend", "cassandra")
.set("storage.hostname", "127.0.0.1").open();
TitanManagement mgmt = g.getManagementSystem();
PropertyKey db_local_name = mgmt.makePropertyKey("db_local_name")
.dataType(String.class).make();
mgmt.buildIndex("byDb_local_name", Vertex.class).addKey(db_local_name)
.buildCompositeIndex();
PropertyKey db_schema = mgmt.makePropertyKey("db_schema")
.dataType(String.class).make();
mgmt.buildIndex("byDb_schema", Vertex.class).addKey(db_schema)
.buildCompositeIndex();
PropertyKey db_column = mgmt.makePropertyKey("db_column")
.dataType(String.class).make();
mgmt.buildIndex("byDb_column", Vertex.class).addKey(db_column)
.buildCompositeIndex();
PropertyKey type = mgmt.makePropertyKey("type").dataType(String.class)
.make();
mgmt.buildIndex("byType", Vertex.class).addKey(type)
.buildCompositeIndex();
PropertyKey value = mgmt.makePropertyKey("value")
.dataType(Object.class).make();
mgmt.buildIndex("byValue", Vertex.class).addKey(value)
.buildCompositeIndex();
PropertyKey index = mgmt.makePropertyKey("index")
.dataType(Integer.class).make();
mgmt.buildIndex("byIndex", Vertex.class).addKey(index)
.buildCompositeIndex();
mgmt.commit();
Here is the code that searches for vertices and then adds a vertex with 3 edges, running on a 3 GHz, 2 GB RAM PC. It processes 830 vertices in 3 hours, and I have 100,000 records, so it is too slow. The code is below:
for (Object[] rowObj : list) {
// TXN_ID
Iterator<Vertex> iter = g.query()
.has("db_local_name", "Report Name 1")
.has("db_schema", "MPS").has("db_column", "txn_id")
.has("value", rowObj[0]).vertices().iterator();
if (iter.hasNext()) {
vertex1 = iter.next();
logger.debug("vertex1=" + vertex1.getId() + ","
+ vertex1.getProperty("db_local_name") + ","
+ vertex1.getProperty("db_schema") + ","
+ vertex1.getProperty("db_column") + ","
+ vertex1.getProperty("type") + ","
+ vertex1.getProperty("index") + ","
+ vertex1.getProperty("value"));
}
// TXN_TYPE
iter = g.query().has("db_local_name", "Report Name 1")
.has("db_schema", "MPS").has("db_column", "txn_type")
.has("value", rowObj[1]).vertices().iterator();
if (iter.hasNext()) {
vertex2 = iter.next();
logger.debug("vertex2=" + vertex2.getId() + ","
+ vertex2.getProperty("db_local_name") + ","
+ vertex2.getProperty("db_schema") + ","
+ vertex2.getProperty("db_column") + ","
+ vertex2.getProperty("type") + ","
+ vertex2.getProperty("index") + ","
+ vertex2.getProperty("value"));
}
// WALLET_ID
iter = g.query().has("db_local_name", "Report Name 1")
.has("db_schema", "MPS").has("db_column", "wallet_id")
.has("value", rowObj[2]).vertices().iterator();
if (iter.hasNext()) {
vertex3 = iter.next();
logger.debug("vertex3=" + vertex3.getId() + ","
+ vertex3.getProperty("db_local_name") + ","
+ vertex3.getProperty("db_schema") + ","
+ vertex3.getProperty("db_column") + ","
+ vertex3.getProperty("type") + ","
+ vertex3.getProperty("index") + ","
+ vertex3.getProperty("value"));
}
vertex4 = g.addVertex(null);
vertex4.setProperty("db_local_name", "Report Name 1");
vertex4.setProperty("db_schema", "MPS");
vertex4.setProperty("db_column", "amount");
vertex4.setProperty("type", "indivisual_0");
vertex4.setProperty("value", rowObj[3].toString());
vertex4.setProperty("index", i);
vertex1.addEdge("data", vertex4);
logger.debug("vertex1 added");
vertex2.addEdge("data", vertex4);
logger.debug("vertex2 added");
vertex3.addEdge("data", vertex4);
logger.debug("vertex3 added");
i++;
g.commit();
}
Is there any way to optimize this code?
For completeness, this question was answered in the Aurelius Graphs mailing list:
https://groups.google.com/forum/#!topic/aureliusgraphs/XKT6aokRfFI
Basically:
1. Build/use a real composite index:
mgmt.buildIndex("by_local_name_schema_value", Vertex.class).addKey(db_local_name).addKey(db_schema).addKey(value).buildCompositeIndex();
2. Don't call g.commit() after each loop cycle; instead do something like this: if (++i % 10000 == 0) g.commit();
3. Turn on storage.batch-loading if you are not already doing so.
4. If all you can throw at Cassandra is 2 GB of RAM, consider using BerkeleyDB. Cassandra prefers 4 GB of RAM minimum and would probably like "more".
5. I don't know the nature of your data, but can you pre-sort it and use BatchGraph as described in the Powers of Ten - Part I blog post and in the wiki? Using BatchGraph would prevent you from having to maintain the transaction described in point 2 above.
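To make points 1 and 2 concrete, here is a rough sketch (my own illustration against the Titan 0.5-style API used in the question, assuming a fresh schema where these property keys don't exist yet; the vertex lookups stay as in the original loop):

// Sketch: one composite index covering the keys that are queried together,
// mirroring the index suggested in point 1.
TitanManagement mgmt = g.getManagementSystem();
PropertyKey db_local_name = mgmt.makePropertyKey("db_local_name").dataType(String.class).make();
PropertyKey db_schema = mgmt.makePropertyKey("db_schema").dataType(String.class).make();
PropertyKey value = mgmt.makePropertyKey("value").dataType(Object.class).make();
mgmt.buildIndex("by_local_name_schema_value", Vertex.class)
        .addKey(db_local_name).addKey(db_schema).addKey(value)
        .buildCompositeIndex();
mgmt.commit();

// Point 2: commit in batches rather than once per row.
int i = 0;
for (Object[] rowObj : list) {
    // ... find vertex1/vertex2/vertex3 and add vertex4 with its edges, as above ...
    if (++i % 10000 == 0) {
        g.commit();
    }
}
g.commit(); // flush the final partial batch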