I'm trying to transpose a jazz lead sheet. When I transpose the Music21 stream, only the notes are being transposed. The ChordSymbols stay the same.
Here's an example in TinyNotation of what I'm seeing. Note: I'm not using TinyNotation in my actual use case, but it works for providing an example here.
from music21 import *

class HarmonyModifier(tinyNotation.Modifier):
    def postParse(self, n):
        cs = harmony.ChordSymbol(n.pitch.name + self.modifierData)
        self.parent().stream.append(cs)
        return n

tnc = tinyNotation.Converter()
tnc.modifierUnderscore = HarmonyModifier
tnc.load("4/4 d2_m7 g4_7 c_maj7")
s = tnc.parse().stream
s.show("text")

s.transpose(interval.GenericInterval(2), inPlace=True)
s.show("text")
And here's the output.
{0.0} <music21.stream.Measure 1 offset=0.0>
    {0.0} <music21.clef.BassClef>
    {0.0} <music21.meter.TimeSignature 4/4>
    {0.0} <music21.harmony.ChordSymbol Dm7>
    {0.0} <music21.note.Note D>
    {2.0} <music21.harmony.ChordSymbol G7>
    {2.0} <music21.note.Note G>
    {3.0} <music21.harmony.ChordSymbol Cmaj7>
    {3.0} <music21.note.Note C>
    {4.0} <music21.bar.Barline style=final>
# Note ChordSymbols are not changed after transpose()
{0.0} <music21.stream.Measure 1 offset=0.0>
    {0.0} <music21.clef.BassClef>
    {0.0} <music21.meter.TimeSignature 4/4>
    {0.0} <music21.harmony.ChordSymbol Dm7>
    {0.0} <music21.note.Note E>
    {2.0} <music21.harmony.ChordSymbol G7>
    {2.0} <music21.note.Note A>
    {3.0} <music21.harmony.ChordSymbol Cmaj7>
    {3.0} <music21.note.Note D>
    {4.0} <music21.bar.Barline style=final>
I'm using music21-4.1.0.
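A possible workaround sketch (untested against music21 4.1.0): transpose the ChordSymbols explicitly after transposing the stream. The interval.Interval('M2') below is an assumption chosen to match the generic second used in this example, and whether the displayed figure updates automatically is not verified here.

# Hypothetical workaround: transpose each ChordSymbol by the same interval as the notes.
itv = interval.Interval('M2')  # assumed equivalent of GenericInterval(2) for this example
for cs in s.recurse().getElementsByClass(harmony.ChordSymbol):
    cs.transpose(itv, inPlace=True)
s.show("text")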
I am trying to export all the command history from RStudio on a Mac. But in the "History" tab, or using savehistory(file = ".Rhistory"), only about the 500 most recent commands are exported. However, when I search commands directly, I can see commands from well before those 500, for example from half a year ago. So where does RStudio store this full history, and how can I export it? If the exported history included the date of each command, that would be even better. Thank you a lot.
After searching around for the same question, here's an answer in case anyone else comes across the same issue:
savehistory is a base R function that will save everything from the current session, whereas RStudio has a separate database that the History pane uses. The location of this database depends on your operating system (more info here).
Windows:
%localappdata%\RStudio
macOS:
open ~/.local/share/rstudio
Once you've located the history_database file, this helpful code by @vzemlys lets you export its contents to a text file with "nicer formatting" (i.e. date and time stamps for each line of code history):
library(dplyr)
library(magrittr)
library(lubridate)
library(bit64)
library(stringr)
lns <- readLines("yourfilelocation/history_database") %>% str_split(pattern=":",n=2)
hist_db <- data_frame(epoch=as.integer64(sapply(lns,"[[",1)),history=sapply(lns,"[[",2))
hist_db %<>% mutate(nice_date = as.POSIXct(epoch/1000,origin = "1970-01-01",tz = "EET"))
hist_db %<>% mutate(day = ceiling_date(nice_date,unit = "day")-days(1))
hist_db %<>% dplyr::select(-epoch)
dd <- hist_db$day %>% unique %>% sort
ff <- "hist_nice.txt"
cat("R history","\n",rep("-",80),"\n",file=ff,sep="")
for(i in 1:length(dd)) {
  cat("\n\n",format(dd[i]),"\n",rep("-",80),"\n",file=ff,sep="",append=TRUE)
  hist_db %>% filter(day==dd[i]) %>% dplyr::select(nice_date,history) %>% arrange(nice_date) %>%
    write.table(ff,sep="\t", quote=F, row.names=FALSE, col.names=FALSE, append=TRUE)
}
Example output:
--------------------------------------------------------------------------------
R history
--------------------------------------------------------------------------------
2022-06-08
--------------------------------------------------------------------------------
2022-06-08 13:32:57 test <- testfile %>% pivot_wider(names_from=scenario, values_from=sst) %>% na.omit()
2022-06-08 13:32:57 ggplot() + theme_bw() +
2022-06-08 13:32:57 geom_tile(data = test, aes(x,y,colour=layer), alpha=0.8) +
2022-06-08 13:32:57 scale_color_viridis() +
2022-06-08 13:32:57 coord_equal() +
2022-06-08 13:32:57 theme(legend.position="bottom")
--------------------------------------------------------------------------------
I have the following baseline (plot not shown), which, as can be seen, has an almost sinusoidal shape. I am trying to use polyfit on it. Actually, what I have are two arrays of data, one called x and the other y. So what I am using is:
porder = 2
coefs = np.polyfit(x, y, porder)
baseline = np.poly1d(coefs)
cleanspec = y - baseline(x)
My goal is to obtain a clean spectrum in the end, one with a straight baseline and no undulation.
However, the fitting is not working. Any suggestions for another, more effective method?
I have tried changing porder to 3, but I get this warning and it doesn't change anything:
Polyfit may be poorly conditioned
My data for x:
[1.10192816e+11 1.10192893e+11 1.10192969e+11 1.10193045e+11
1.10193122e+11 1.10193198e+11 1.10193274e+11 1.10193350e+11
1.10193427e+11 1.10193503e+11 1.10193579e+11 1.10193656e+11
1.10193732e+11 1.10193808e+11 1.10193885e+11 1.10193961e+11
1.10194037e+11 1.10194113e+11 1.10194190e+11 1.10194266e+11
1.10194342e+11 1.10194419e+11 1.10194495e+11 1.10194571e+11
1.10194647e+11 1.10194724e+11 1.10194800e+11 1.10194876e+11
1.10194953e+11 1.10195029e+11 1.10195105e+11 1.10195182e+11
1.10195258e+11 1.10195334e+11 1.10195410e+11 1.10195487e+11
1.10195563e+11 1.10195639e+11 1.10195716e+11 1.10195792e+11
1.10195868e+11 1.10195944e+11 1.10196021e+11 1.10196097e+11
1.10196173e+11 1.10196250e+11 1.10196326e+11 1.10196402e+11
1.10196479e+11 1.10196555e+11 1.10196631e+11 1.10196707e+11
1.10196784e+11 1.10196860e+11 1.10196936e+11 1.10197013e+11
1.10197089e+11 1.10197165e+11 1.10197241e+11 1.10197318e+11
1.10197394e+11 1.10197470e+11 1.10197547e+11 1.10197623e+11
1.10197699e+11 1.10197776e+11 1.10197852e+11 1.10197928e+11
1.10198004e+11 1.10198081e+11 1.10198157e+11 1.10198233e+11
1.10198310e+11 1.10198386e+11 1.10198462e+11 1.10198538e+11
1.10198615e+11 1.10198691e+11 1.10198767e+11 1.10198844e+11
1.10198920e+11 1.10198996e+11 1.10199073e+11 1.10199149e+11
1.10199225e+11 1.10199301e+11 1.10199378e+11 1.10199454e+11
1.10199530e+11 1.10199607e+11 1.10199683e+11 1.10199759e+11
1.10199835e+11 1.10199912e+11 1.10199988e+11 1.10200064e+11
1.10200141e+11 1.10202582e+11 1.10202658e+11 1.10202735e+11
1.10202811e+11 1.10202887e+11 1.10202963e+11 1.10203040e+11
1.10203116e+11 1.10203192e+11 1.10203269e+11 1.10203345e+11
1.10203421e+11 1.10203498e+11 1.10203574e+11 1.10203650e+11
1.10203726e+11 1.10203803e+11 1.10203879e+11 1.10203955e+11
1.10204032e+11 1.10204108e+11 1.10204184e+11 1.10204260e+11
1.10204337e+11 1.10204413e+11 1.10204489e+11 1.10204566e+11
1.10204642e+11 1.10204718e+11 1.10204795e+11 1.10204871e+11
1.10204947e+11 1.10205023e+11 1.10205100e+11 1.10205176e+11
1.10205252e+11 1.10205329e+11 1.10205405e+11 1.10205481e+11
1.10205557e+11 1.10205634e+11 1.10205710e+11 1.10205786e+11
1.10205863e+11 1.10205939e+11 1.10206015e+11 1.10206092e+11
1.10206168e+11 1.10206244e+11 1.10206320e+11 1.10206397e+11
1.10206473e+11 1.10206549e+11 1.10206626e+11 1.10206702e+11
1.10206778e+11 1.10206854e+11 1.10206931e+11 1.10207007e+11
1.10207083e+11 1.10207160e+11 1.10207236e+11 1.10207312e+11
1.10207389e+11 1.10207465e+11 1.10207541e+11 1.10207617e+11
1.10207694e+11 1.10207770e+11 1.10207846e+11 1.10207923e+11
1.10207999e+11 1.10208075e+11 1.10208151e+11 1.10208228e+11
1.10208304e+11 1.10208380e+11 1.10208457e+11 1.10208533e+11
1.10208609e+11 1.10208686e+11 1.10208762e+11 1.10208838e+11
1.10208914e+11 1.10208991e+11 1.10209067e+11 1.10209143e+11
1.10209220e+11 1.10209296e+11 1.10209372e+11 1.10209448e+11
1.10209525e+11 1.10209601e+11 1.10209677e+11 1.10209754e+11
1.10209830e+11]
and for y:
[ 0.00143858 0.05495827 0.07481739 0.03287334 -0.06275658 0.03744501
-0.04392341 0.02849104 0.03173781 0.09748282 0.02854265 0.06573162
0.08215295 0.0240697 0.00931477 0.17572605 0.06783381 0.04853354
-0.00226023 0.03722596 0.09687121 0.10767829 0.04922701 0.08036865
0.02371989 0.13885361 0.13903188 0.09910567 0.08793601 0.06048823
0.03932097 0.04061129 0.03706228 0.13764936 0.14150589 0.12226208
0.09041878 0.13638676 0.11107155 0.12261369 0.11765545 0.07425344
0.06643712 0.1449991 0.14256909 0.0924173 0.09291525 0.12216271
0.11272059 0.07618891 0.16787807 0.07832849 0.10786856 0.12381844
0.14182937 0.08078092 0.11932429 0.06383649 0.02923562 0.0864741
0.07806758 0.04514088 0.12929371 0.11769577 0.03619867 0.02811366
0.06401639 0.06883735 0.01162673 0.0956252 0.11206549 0.0485106
0.07269545 0.01662149 0.01287365 0.13401546 0.06300487 0.01994627
0.00721926 0.04863274 -0.01578364 0.0235379 0.03102316 0.00392559
0.05662182 0.04643381 -0.00665026 0.05532307 -0.01533339 0.04838893
0.02097954 0.02551123 0.03727188 -0.04001189 -0.04294883 0.02837669
-0.06062512 -0.0743994 -0.04665618 -0.03553261 -0.07057554 -0.07028277
-0.07502298 -0.07247965 -0.03540266 -0.03226398 -0.08014487 -0.11907543
-0.18521053 -0.1117617 -0.14377897 -0.07113503 -0.02480966 -0.07459746
-0.07994097 -0.02648713 -0.10288478 -0.13328137 -0.08121377 -0.13742166
-0.024583 -0.11391389 -0.02717251 -0.08876166 -0.04369363 -0.0790144
-0.09589054 -0.12058701 0.00041344 -0.06646403 -0.06368366 -0.10335613
-0.04508286 -0.18360729 -0.0551775 -0.06476622 -0.0834523 -0.01276785
-0.04145486 -0.14549992 -0.11186823 -0.07663398 -0.11920359 -0.0539315
-0.10507118 -0.09112374 -0.09751319 -0.06848278 -0.09031172 -0.07218853
-0.03129234 -0.04543539 -0.00942861 -0.06711099 -0.00712202 -0.11696418
-0.06344093 0.03624227 -0.04798777 0.01174394 -0.08326314 -0.06761215
-0.12063419 -0.05236908 -0.03914692 -0.05370061 -0.01620056 0.06731788
-0.06600111 -0.04601257 -0.02144361 0.00256863 -0.00093034 0.00629604
-0.0252835 -0.00907992 0.03583489 -0.03761906 0.10325763 0.08016437
-0.04900467 0.0110328 0.05019604 -0.04428984 -0.03208058 0.05095359
-0.01807463 0.0691733 0.07472691 0.00659871 0.00947692 0.0014422
0.05227057]
Having this huge offset in x is probably not helping. It definitely works when removing it for the fitting process. It looks like this:
import matplotlib.pyplot as plt
import numpy as np

# rescale x to order one so the polynomial fit is well conditioned
scaledx = xdata * 1e-8 - 1100
coefs = np.polyfit( scaledx, ydata, 7 )
base = np.poly1d( coefs )

xt = np.linspace( 1.9, 2.1, 150 )
yt = base( xt )

fig = plt.figure()
ax = fig.add_subplot( 2, 1, 1 )   # data with fitted baseline
bx = fig.add_subplot( 2, 1, 2 )   # baseline-corrected spectrum
ax.scatter( scaledx , ydata )
ax.plot( xt , yt )
bx.plot( scaledx , ydata - base( scaledx ) )
plt.show()
With xdata and ydata being numpy arrays of the OP's data lists, this provides the data with the fitted baseline in the upper panel and the baseline-corrected spectrum in the lower one (plot not shown).
Addon
Concerning the "poorly conditioned" warning, one should remember how simple linear least squares works. In the case of a polynomial, one builds the matrix:
A = [
[1, x1, x1**2, ...],
[1, x2, x2**2, ...],
...
[1, xn, xn**2, ...]
]
and one needs B^(-1), the inverse of B, with B = A^T·A and A^T being the transpose of A. Now, with x values on the order of 1e11, B has entries of order 1 at one end of the diagonal and, for a second-order polynomial, of order 1e44 at the other; for a third-order polynomial this gets correspondingly worse. Computing the inverse therefore becomes numerically unstable. Luckily, and as used above, this is easily fixed by simply rescaling the problem at hand.
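As a small illustration (a sketch with synthetic x values mimicking the magnitude of the data above, not the actual dataset), one can compare the condition number of A^T·A before and after rescaling:

import numpy as np

# synthetic x values of the same order of magnitude as the data above (hypothetical)
x_raw = 1.10193e11 + np.linspace(0.0, 1.7e7, 150)
x_scaled = x_raw * 1e-8 - 1100                # same rescaling as in the fit above

for label, x in (("raw x", x_raw), ("rescaled x", x_scaled)):
    A = np.vander(x, 3)                       # columns x**2, x, 1 -> second-order polynomial
    print(label, "cond(A^T A) =", np.linalg.cond(A.T @ A))

The raw case comes out many orders of magnitude worse than the rescaled one, which is exactly what the "Polyfit may be poorly conditioned" warning is about.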
I'm trying to embed Python in my C application. I downloaded the package from the official Python website and managed to do a simple Hello World.
Now I want to go deeper and use some Python libraries like numpy, keras, tensorflow...
I'm working with Python 3.5.4; I installed all the needed packages on my PC with pip3:
pip3 install keras
pip3 install tensorflow
...
Then I created my script and launched it in a Python environment, and it works fine:
Python:
# Importing the libraries
#
import numpy as np
import pandas as pd
dataset2 = pd.read_csv('I:\RNA\dataset19.csv')
X_test = dataset2.iloc[:, 0:228].values
y_test = dataset2.iloc[:, 228].values
# 2.
import pickle
sc = pickle.load(open('I:\RNA\isVerb_sc', 'rb'))
X_test = sc.transform(X_test)
# 3.
import keras
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import Dropout
classifier = Sequential()
classifier.add(Dense(units = 114, kernel_initializer = 'uniform', activation = 'relu', input_dim = 228))
classifier.add(Dropout(p = 0.3))
classifier.add(Dense(units = 114, kernel_initializer = 'uniform', activation = 'relu'))
classifier.add(Dropout(p = 0.3))
classifier.add(Dense(units = 1, kernel_initializer = 'uniform', activation = 'sigmoid'))
classifier.compile(optimizer = 'adam', loss = 'binary_crossentropy', metrics = ['accuracy'])
classifier.load_weights('I:\RNA\isVerb_weights.h5')
y_pred = classifier.predict(X_test)
y_pred1 = (y_pred > 0.5)
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_test, y_pred1)
But when I execute the same script in a C environment with embedded Python, it doesn't work:
At first I executed my script directly with PyRun_SimpleFile, with no luck, so I sliced it into multiple instructions with PyRun_SimpleString to pinpoint the problem:
C:
result = PyRun_SimpleString("import numpy as np"); // result = 0 (ok)
result = PyRun_SimpleString("import pandas as pd"); // result = 0 (ok)
...
result = PyRun_SimpleString("import pickle"); // result = 0 (ok)
... (all instructions above work)
result = PyRun_SimpleString("import keras"); // result = -1 !!
... (everything below this fails)
but there is not a single stack trace for this error. I tried this, but I just got:
"Here's the output: (null)"
My initialization of Python in C seems correct, since other libraries import fine:
// Python
wchar_t *stdProgramName = L"I:\\LIBs\\cpython354";
Py_SetProgramName(stdProgramName);
wchar_t *stdPythonHome = L"I:\\LIBs\\cpython354";
Py_SetPythonHome(stdPythonHome);
wchar_t *stdlib = L"I:\\LIBs\\cpython354;I:\\LIBs\\cpython354\\Lib\\python35.zip;I:\\LIBs\\cpython354\\Lib;I:\\LIBs\\cpython354\\DLLs;I:\\LIBs\\cpython354\\Lib\\site-packages";
Py_SetPath(stdlib);
// Initialize Python
Py_Initialize();
When inside a Python shell, the line import keras takes some time (about 3 seconds) but works (there is a warning, but I found no harm in it):
>>> import keras
I:\LIBs\cpython354\lib\site-packages\h5py\__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
from ._conv import register_converters as _register_converters
Using TensorFlow backend.
>>>
I'm at a loss now; I don't know where to look since there is no stack trace.
It seems that when you import keras, it executes this line:
sys.stderr.write('Using TensorFlow backend.\n')
but sys.stderr is not defined in embedded Python on Windows, so the import fails.
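To see that failure mode in plain Python (a simulation only, assuming sys.stderr ends up as None in the embedded interpreter; this is not the embedded setup itself):

import sys

sys.stderr = None                                      # mimic the embedded situation (assumption)
try:
    sys.stderr.write('Using TensorFlow backend.\n')    # what keras does at import time
except AttributeError as exc:
    print("the import-time write fails with:", exc)

Inside the embedded interpreter that AttributeError has no working stderr to be printed to, which is why you only see the -1 return code.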
A simple correction is to define sys.stderr before importing keras, for example:
import sys

class CatchOutErr:
    def __init__(self):
        self.value = ''

    def write(self, txt):
        self.value += txt

catchOutErr = CatchOutErr()
sys.stderr = catchOutErr
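A usage sketch (assuming the snippet above has already been executed in the embedded interpreter, e.g. via PyRun_SimpleString, before the first import of keras):

# With sys.stderr redirected, the import no longer dies on sys.stderr.write(...)
import keras
# Whatever keras/TensorFlow wrote to stderr is now captured and can be inspected or logged:
print(catchOutErr.value)    # e.g. "Using TensorFlow backend." plus any warnings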
I have used the example described here (http://openmdao.readthedocs.org/en/1.5.0/usr-guide/tutorials/doe-drivers.html?highlight=driver) to show my problem. I want to use the same approach for a component where the "params" are arrays and no longer floats. See the example below:
from openmdao.api import IndepVarComp, Group, Problem, ScipyOptimizer, ExecComp, DumpRecorder, Component
from openmdao.drivers.latinhypercube_driver import LatinHypercubeDriver, OptimizedLatinHypercubeDriver
import numpy as np
class Paraboloid(Component):
    """ Evaluates the equation f(x,y) = (x-3)^2 + xy + (y+4)^2 - 3 """

    def __init__(self):
        super(Paraboloid, self).__init__()
        self.add_param('x', val=0.0)
        self.add_param('y', val=0.0)
        self.add_output('f_xy', val=0.0)

    def solve_nonlinear(self, params, unknowns, resids):
        """f(x,y) = (x-3)^2 + xy + (y+4)^2 - 3"""
        x = params['x']
        y = params['y']
        unknowns['f_xy'] = (x-3.0)**2 + x*y + (y+4.0)**2 - 3.0

    def linearize(self, params, unknowns, resids):
        """ Jacobian for our paraboloid."""
        x = params['x']
        y = params['y']
        J = {}
        J['f_xy', 'x'] = 2.0*x - 6.0 + y
        J['f_xy', 'y'] = 2.0*y + 8.0 + x
        return J


class ParaboloidArray(Component):
    """ Evaluates the equation f(x,y) = (x-3)^2 + xy + (y+4)^2 - 3 """

    def __init__(self):
        super(ParaboloidArray, self).__init__()
        self.add_param('X', val=np.array([0., 0.]))
        self.add_output('f_xy', val=0.0)

    def solve_nonlinear(self, params, unknowns, resids):
        """f(x,y) = (x-3)^2 + xy + (y+4)^2 - 3"""
        x = params['X'][0]
        y = params['X'][1]  # note: both coordinates come from the array param 'X'
        unknowns['f_xy'] = (x-3.0)**2 + x*y + (y+4.0)**2 - 3.0
top = Problem()
root = top.root = Group()
root.add('p1', IndepVarComp('x', 50.0), promotes=['*'])
root.add('p2', IndepVarComp('y', 50.0), promotes=['*'])
root.add('comp', Paraboloid(), promotes=['*'])
top.driver = OptimizedLatinHypercubeDriver(num_samples=4, seed=0, population=20, generations=4, norm_method=2)
top.driver.add_desvar('x', lower=-50.0, upper=50.0)
top.driver.add_desvar('y', lower=-50.0, upper=50.0)
top.driver.add_objective('f_xy')
top.setup()
top.run()
top.cleanup()
###########################
print("case float ok")
top = Problem()
root = top.root = Group()
root.add('p1', IndepVarComp('X', np.array([50., 50.])), promotes=['*'])
root.add('comp', ParaboloidArray(), promotes=['*'])
top.driver = OptimizedLatinHypercubeDriver(num_samples=4, seed=0, population=20, generations=4, norm_method=2)
top.driver.add_desvar('X', lower=np.array([-50., -50.]), upper=np.array([50., 50.]))
top.driver.add_objective('f_xy')
top.setup()
top.run()
top.cleanup()
I obtain the following error :
Traceback (most recent call last):
  File "C:\Program Files (x86)\Wing IDE 101 5.0\src\debug\tserver\_sandbox.py", line 102, in <module>
  File "D:\tlefeb\Anaconda2\Lib\site-packages\openmdao\core\problem.py", line 1038, in run
    self.driver.run(self)
  File "D:\tlefeb\Anaconda2\Lib\site-packages\openmdao\drivers\predeterminedruns_driver.py", line 108, in run
    for run in runlist:
  File "D:\tlefeb\Anaconda2\Lib\site-packages\openmdao\drivers\latinhypercube_driver.py", line 57, in _build_runlist
    design_var_buckets = self._get_buckets(bounds['lower'], bounds['upper'])
  File "D:\tlefeb\Anaconda2\Lib\site-packages\openmdao\drivers\latinhypercube_driver.py", line 101, in _get_buckets
    bucket_walls = np.linspace(low, high, self.num_samples + 1)
  File "D:\tlefeb\Anaconda2\Lib\site-packages\numpy\core\function_base.py", line 102, in linspace
    if step == 0:
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
Did I misunderstand something in my way of coding?
I get a different error than you, using the latest OpenMDAO master, but I get an error nonetheless. There isn't anything wrong with your model; rather, there are some bugs in using array variables for DOEs. I've added a bug-fix story to the OpenMDAO backlog, which we'll hopefully be able to deal with in the next couple of weeks. We'd gladly accept a pull request if you develop a fix before we get to it, though.
    1  2  3
1   a  b  c
2   d  e  f
3   g  h  i
When clicking on 1 to select a, b and c, what signal does the tableWidget send?
I only found the answer (I think it's the answer to this question, anyway) in C++, which is the following:
connect( (QObject*) this->verticalHeader(), SIGNAL( sectionClicked(int) ), this, SLOT( rowSelection( int ) ) );
I tried to implement it in Python like so:
self.QTableWidget.verticalHeader.sectionClicked.connect(self.Test)
But it says that verticalHeader has no attribute sectionClicked.
I think the signal that's emitted is correct, but your code is not quite right. The following small example seems to work for me:
from PySide import QtGui
import sys

class W(QtGui.QTableWidget):
    def __init__(self):
        super(W, self).__init__(3, 3)
        self.verticalHeader().sectionClicked.connect(self.onSectionClicked)

    def onSectionClicked(self, logicalIndex):
        print("onSectionClicked:", logicalIndex)

if __name__ == "__main__":
    app = QtGui.QApplication(sys.argv)
    w = W()
    w.show()
    sys.exit(app.exec_())
So, from the QTableWidget instance, you need to obtain the vertical header object by calling verticalHeader(); it is this header that has the sectionClicked signal you mentioned.
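Applied to the line from the question (assuming self.QTableWidget really is the QTableWidget instance and self.Test is the slot), the only change needed is the pair of parentheses on verticalHeader:

# hypothetical one-line fix of the connect call from the question
self.QTableWidget.verticalHeader().sectionClicked.connect(self.Test)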