How to enable formatted Xdebug errors and traces - xdebug

I am not sure when or what I changed, but suddenly Xdebug no longer renders its formatted stack traces. Instead, it renders the stack trace without any HTML (here is an example), whereas I would expect the orange tables like here.
I have searched the documentation, but cannot find any reference to a setting or config that would (un)set this. What did I do wrong?
My xdebug.ini (Ubuntu, so /etc/php5/conf.d/xdebug.ini) is small:
zend_extension=/usr/lib/php5/20090626+lfs/xdebug.so
xdebug.default_enable = 1
xdebug.auto_trace = 1
xdebug.remote_enable = 1
xdebug.remote_port = 9010
xdebug.remote_host = audrey
; xdebug.profiler_enable = 1
; Markup of var_dump
xdebug.overload_var_dump = 1

Xdebug respects PHP's normal settings regarding error reporting and formatting. In this case, you have most likely "html_errors" set to Off in php.ini. Turn it back to On, and Xdebug should show nice orange tables again.
cheers,
Derick
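For reference, a minimal sketch of turning it back on; html_errors is changeable at runtime (PHP_INI_ALL), so either of these should work, depending on where you prefer to set it:
; in php.ini or a conf.d snippet:
html_errors = On
// or from PHP itself, before any error is triggered:
ini_set('html_errors', '1');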

Try this:
ini_set('xdebug.auto_trace', 1);
http://php.net/manual/en/function.ini-set.php

Related

Biblatex doesn't compile. Probably .bib file not recognised

I've spent many hours trying to get my bibliography working - unsuccessfully. I suspect that, somehow, my .bib file doesn't get recognised.
Help would be greatly appreciated.
MWE:
\documentclass[a4paper, 12pt]{article}
\usepackage{array}
\usepackage{lscape}
\usepackage[paper=portrait,pagesize]{typearea}
\usepackage[showframe=false]{geometry}
\usepackage{changepage}
\usepackage{tabularx}
\usepackage{graphicx}
\usepackage{adjustbox}
\usepackage[utf8]{inputenc}
\usepackage{babel,csquotes,xpatch}
\usepackage[backend=biber,style=authoryear, natbib]{biblatex}
\addbibresource{test.bib}
\usepackage{xurl}
\usepackage[colorlinks,allcolors=blue]{hyperref}
\begin{document}
This is a test... test test\\
\cite{glaeser_gyourko}\\
\cite{hsieh-moretti:2019}\\
\cite{glaeser_gyourko}\\
\printbibliography
\end{document}
test.bib file:
@article{hsieh-moretti:2019,
Author = {Hsieh, Chang-Tai and Moretti, Enrico},
Title = {Housing Constraints and Spatial Misallocation},
Journal = {American Economic Journal: Macroeconomics},
Volume = {11},
Number = {2},
Year = {2019},
Month = {4},
Pages = {1-39},
DOI = {10.1257/mac.20170388},
URL = {https://www.aeaweb.org/articles?id=10.1257/mac.20170388}
}
@article{glaeser_gyourko,
Author = {Glaeser, Edward and Gyourko, Joseph},
Title = {The Economic Implications of Housing Supply},
Journal = {Journal of Economic Perspectives},
Volume = {32},
Number = {1},
Year = {2018},
Month = {2},
Pages = {3-30},
DOI = {10.1257/jep.32.1.3},
URL = {https://www.aeaweb.org/articles?id=10.1257/jep.32.1.3}
}
In the PDF it looks like this: (screenshot omitted)
I get the following information in the source viewer:
Process started
INFO - This is Biber 2.14
INFO - Logfile is 'test.blg'
INFO - Reading 'test.bcf'
INFO - Found 2 citekeys in bib section 0
INFO - Processing section 0
INFO - Globbing data source 'test.bib'
INFO - Globbed data source 'test.bib' to test.bib
INFO - Looking for bibtex format file 'test.bib' for section 0
INFO - LaTeX decoding ...
INFO - Found BibTeX data source 'test.bib'
Process exited with error(s)
I use Texmaker 5.0.4 on macOS (screenshots of my configuration omitted).
I really have very little idea of what is going on. Today I started a work session, added a new source, and it didn't work. I deleted the new source so that the bibliography would be the same as before my change, and it still didn't work. This leads me to assume that, somehow, the program doesn't understand where the bibliography is. The .bib file and the document are in the same folder.
What I tried:
Triple-checked the code in the bibliography using tools such as https://biblatex-linter.herokuapp.com/
Cleared the cache of all documents.
Changed natbib in the command \usepackage[backend=biber,style=authoryear, natbib]{biblatex} to biber -> doesn't seem to work.
Left out natbib and got the same result: \usepackage[backend=biber,style=authoryear, natbib]{biblatex} => \usepackage[backend=biber,style=authoryear]{biblatex}
Added \usepackage{natbib} in addition to biblatex, but this produces compatibility issues.
Added \usepackage[utf8]{inputenc} and \usepackage{babel,csquotes,xpatch} because they are recommended by this biblatex cheat sheet: http://tug.ctan.org/info/biblatex-cheatsheet/biblatex-cheatsheet.pdf. Didn't change anything.
Thanks for your time!
I had a similar problem; what helped me was looking up the articles and rewriting the entries using the Google Scholar BibTeX version.
The problem arose because I had changed a name manually. This resulted in an error that was not reported, and it sent me researching exactly the same kind of ".bib file not recognised" error.
Your housing article should be formatted like this:
@article{hsieh2019housing,
title={Housing constraints and spatial misallocation},
author={Hsieh, Chang-Tai and Moretti, Enrico},
journal={American Economic Journal: Macroeconomics},
volume={11},
number={2},
pages={1--39},
year={2019}
}
I found another source of this problem: Citavi generates invalid BibTeX syntax. Often the year field is not filled in correctly, or special characters are not escaped properly. Maybe these are data errors that originate in the sources rather than in Citavi, but nonetheless Citavi often does not export valid BibTeX.
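If you suspect the exported .bib file itself, biber's tool mode can parse it standalone, outside of any LaTeX run, and will report the syntax problems it finds. A quick check, assuming biber is on your PATH and test.bib is the file from the question:
biber --tool test.bib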

Trac ticket-custom permission

I created a custom checkbox field in Trac, as written here.
How can I make the field available only to users with TRAC_ADMIN?
Example trac.ini:
[ticket-custom]
newfield = checkbox
newfield.label = Checkbox field name
newfield.value = 0
newfield.permissions = TRAC_ADMIN
The newfield.permissions line does not work.
Thank you.
You could use BlackMagicTicketTweaksPlugin. The feature is proposed in #9289, and I will probably work on adding the feature to Trac for release 1.4 (probably about a year away since 1.2 is just about to come out).
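From memory, that plugin is configured through its own section in trac.ini, roughly like the sketch below; treat the exact option names as an assumption and check the plugin's page on trac-hacks before relying on them:
[components]
blackmagic.* = enabled

[blackmagic]
tweaks = newfield
newfield.permission = TRAC_ADMIN
newfield.disable = true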

Clicking checkbox on web page using Applescript

I'm somewhat new to Applescript, and I am trying to make Applescript check a checkbox to select it. I want the checkbox to be clicked regardless of whether or not it's already checked. Here is the checkbox's location according to the Accessibility Inspector:
<AXApplication: “Safari”>
<AXWindow: “Studio”>
<AXGroup>
<AXGroup>
<AXGroup>
<AXScrollArea: “”>
<AXWebArea: “”>
<AXGroup: “”>
<AXCheckBox: “”>
Attributes:
AXRole: “AXCheckBox”
AXSubrole: “(null)”
AXRoleDescription: “check box”
AXChildren: “<array of size 0>”
AXHelp: “”
AXParent: “<AXGroup: “”>”
AXPosition: “x=1104 y=825”
AXSize: “w=18 h=19”
AXTitle: “”
AXDescription: “”
AXValue: “0”
AXFocused (W): “0”
AXEnabled: “1”
AXWindow: “<AXWindow: “Studio”>”
AXSelectedTextMarkerRange (W): “<AXTextMarkerRange 0x101937860 [0x7fff76e43fa0]>{startMarker:<AXTextMarker 0x1019378b0 [0x7fff76e43fa0]>{length = 24, bytes = 0xac01000000000000c0366e23010000001700000001000000} endMarker:<AXTextMarker 0x101938030 [0x7fff76e43fa0]>{length = 24, bytes = 0xac01000000000000c0366e23010000001700000001000000}}”
AXStartTextMarker: “<AXTextMarker 0x101938030 [0x7fff76e43fa0]>{length = 24, bytes = 0xa00000000000000098975e0d010000000000000001000000}”
AXEndTextMarker: “<AXTextMarker 0x1019378b0 [0x7fff76e43fa0]>{length = 24, bytes = 0xa200000000000000405e7812010000000000000001000000}”
AXVisited: “0”
AXLinkedUIElements: “(null)”
AXSelected: “0”
AXBlockQuoteLevel: “0”
AXTopLevelUIElement: “<AXWindow: “Studio”>”
AXTitleUIElement: “(null)”
AXAccessKey: “(null)”
AXRequired: “0”
AXInvalid: “false”
AXARIABusy: “0”
Actions:
AXPress - press
AXShowMenu - show menu
I've tried multiple methods to get this to work, and I haven't been able to. Any help is appreciated.
Your question with the Accessibility Inspector info is not very helpful, I am afraid.
It would help if we could see the actual elements of the web page.
Have a look at this page I found, which shows checkboxes and the code that makes them up.
Each element has a name and may be nested within some other element.
On that page I can use this AppleScript/JavaScript to check the check1 checkbox.
Hopefully this will give you an idea of how to go about it.
But remember this code snippet is tailored to that page.
Open the web page and run this AppleScript:
tell application "Safari"
set doc to document 1
do JavaScript "document.forms['testform']['check1'].checked = true" in doc
end tell
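If the goal is to simulate an actual click (so any event handlers on the page fire) rather than just set the state, the same approach should work with the DOM click() method; 'testform' and 'check1' are the names from that example page, not from your page:
tell application "Safari"
do JavaScript "document.forms['testform']['check1'].click()" in document 1
end tell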
Update: AppleScript GUI scripting.
Update 2: taking into account "clicked regardless of whether or not it's already checked".
Taking a punt with your Accessibility Inspector output, which is a bit useless (not your fault), try:
activate application "Safari"
tell application "System Events"
set theCheckbox to (checkbox 1 of group 3 of UI element 1 of scroll area 1 of group 1 of group 1 of group 2 of window 1 of application process "Safari")
set isEnabled to value of theCheckbox as boolean -- AXValue: the checked state, not whether the control is enabled
if not isEnabled then
click theCheckbox
end if
end tell

R tm: reloading a 'PCorpus' backend filehash database as corpus (e.g. in restarted session/script)

Having learned loads from answers on this site (thanks!), it's finally time to ask my own question.
I'm using R (tm and lsa packages) to create, clean and simplify, and then run LSA (latent semantic analysis) on, a corpus of about 15,000 text documents. I'm doing this in R 3.0.0 under Mac OS X 10.6.
For efficiency (and to cope with having too little RAM), I've been trying to use either the 'PCorpus' option in tm (database backend support provided by the 'filehash' package), or the newer 'tm.plugin.dc' option for so-called 'distributed' corpus processing. But I don't really understand how either one works under the bonnet.
An apparent bug using DCorpus with tm_map (not relevant right now) led me to do some of the preprocessing work with the PCorpus option instead. And it takes hours. So I use R CMD BATCH to run a script doing things like:
> # load corpus from predefined directory path,
> # and create backend database to support processing:
> bigCcorp = PCorpus(bigCdir, readerControl = list(load=FALSE), dbControl = list(useDb = TRUE, dbName = "bigCdb", dbType = "DB1"))
> # converting to lower case:
> bigCcorp = tm_map(bigCcorp, tolower)
> # removing stopwords:
> stoppedCcorp = tm_map(bigCcorp, removeWords, stoplist)
Now, supposing my script crashes soon after this point, or I just forget to export the corpus in some other form, and then I restart R. The database is still there on my hard drive, full of nicely tidied-up data. Surely I can reload it back into the new R session, to carry on with the corpus processing, instead of starting all over again?
It feels like a noodle question... but no amount of dbInit() or dbLoad() or variations on the 'PCorpus()' function seem to work. Does anyone know the correct incantation?
I've scoured all the related documentation, and every paper and web forum I can find, but total blank - nobody seems to have done it. Or have I missed it?
The original question was from 2013. Meanwhile, in February 2015, a duplicate (or similar) question was answered:
How to reconnect to the PCorpus in the R tm package?. The answer in that post is essential, although pretty minimalist, so I'll try to augment it here.
These are some comments I've just discovered while working on a similar problem:
Note that the dbInit() function is not part of the tm package.
First you need to install the filehash package, which the tm documentation only "suggests" to install. This means it is not a hard dependency of tm.
Supposedly, you can also use the filehashSQLite package with library("filehashSQLite") instead of library("filehash"); both packages have the same interface and work seamlessly together, thanks to their object-oriented design. So also install "filehashSQLite" (edit 2016: some functions such as tm::content_transformer() are not implemented for filehashSQLite).
Then this works:
library(filehashSQLite)
# this string becomes the filename and must not contain dots
# (for example, "mydata.sqlite" is not permitted):
s <- "sqldb_pcorpus_mydata" # replace "mydata" with something more descriptive
suppressMessages(library(filehashSQLite))
if(! file.exists(s)){
# csv is a data frame of 900 documents, 18 cols/features
pc = PCorpus(DataframeSource(csv), readerControl = list(language = "en"), dbControl = list(dbName = s, dbType = "SQLite"))
dbCreate(s, "SQLite")
db <- dbInit(s, "SQLite")
set.seed(234)
# add another record, just to show we can.
# key="test", value = "Hi there"
dbInsert(db, "test", "hi there")
} else {
db <- dbInit(s, "SQLite")
pc <- dbLoad(db)
}
show(pc)
# <<PCorpus>>
# Metadata: corpus specific: 0, document level (indexed): 0
# Content: documents: 900
dbFetch(db, "test")
# remove it
rm(db)
rm(pc)
#reload it
db <- dbInit(s, "SQLite")
pc <- dbLoad(db)
# the corpus entries are now accessible, but not loaded into memory.
# now 900 documents are bound via "Active Bindings", created by makeActiveBinding() from the base package
show(pc)
# [1]   "1"   "2"   "3"   "4"   "5"   "6"   "7"   "8"   "9"
# ...
# [883] "883" "884" "885" "886" "887" "888" "889" "890" "891" "892"
# [893] "893" "894" "895" "896" "897" "898" "899" "900"
# [901] "test"
dbFetch(db, "900")
# <<PlainTextDocument>>
# Metadata: 7
# Content: chars: 33
dbFetch(db, "test")
#[1] "hi there"
This is what the database backend looks like: the documents from the data frame have been encoded and stored inside the SQLite table.
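For the original question's filehash "DB1" backend, the same pattern should carry over. This is an untested sketch, assuming the dbName "bigCdb" and dbType "DB1" from the question's PCorpus() call:
library(filehash)
# reconnect to the existing database file that PCorpus() created:
db <- dbInit("bigCdb", type = "DB1")
# recreate the active bindings; as above, the return value is the vector of keys
bigCcorp <- dbLoad(db)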

What could be causing db.SubmitChanges() to not work in linq-to-sql?

I've set up a very simple example with LINQ to SQL in WPF.
I can get an object (pageItem) out like this and change a property; when I call SubmitChanges() it gives me no error, but it doesn't save the change.
MainDataContext db = new MainDataContext();
var pageItem = (from p in db.PageItems
where p.Id == 1
select p).SingleOrDefault();
pageItem.Title = "changed";
db.SubmitChanges();
What could be causing SubmitChanges not to submit the changes?
MORE INFO:
This doesn't work either; even db.ExecuteCommand doesn't work, and strangely, when debugging, F11 doesn't step into SubmitChanges() or ExecuteCommand(). Why can't I step into those?
using (var db = new MainDataContext())
{
var pageItem = (from p in db.PageItems
where p.Id == 1
select p).SingleOrDefault();
pageItem.Title = "changed";
db.SubmitChanges();
db.ExecuteCommand("INSERT INTO PageItems (Title) VALUES ('this is the title')");
if (pageItem != null)
MainContent.Children.Add(new QuickForm(pageItem));
}
more info:
Setting db.Log = Console.Out gives me this:
SELECT [t0].[Id], [t0].[IdCode], [t0].[Title], [t0].[Description], [t0].[DisplayOrder]
FROM [dbo].[PageItems] AS [t0]
WHERE [t0].[Id] = @p0
'TestPageManager23434.vshost.exe' (Managed): Loaded 'C:\Windows\assembly\GAC_MSIL\PresentationFramework.resources\3.0.0.0_de_31bf3856ad364e35\PresentationFramework.resources.dll'
-- @p0: Input Int (Size = 0; Prec = 0; Scale = 0) [1]
-- Context: SqlProvider(Sql2008) Model: AttributedMetaModel Build: 3.5.30729.1
INSERT INTO PageItems (Title) VALUES ('this is the title')
-- Context: SqlProvider(Sql2008) Model: AttributedMetaModel Build: 3.5.30729.1
The thread 0x1190 has exited with code 0 (0x0).
ANSWER
The solution was three-fold:
1. I was changing a different database than the one I was looking at in Visual Studio. Solution:
var db = new MainDataContext(@"C:\Users\TestUser\Documents\Visual Studio 2008\Projects\TestPageManager23434\TestPageManager23434\Data\Main.mdf");
2. That made the update work but not SubmitChanges(); the solution was to set the primary key.
3. It still wasn't showing all the changes; the problem was that I had a number of "show data" windows open which weren't being updated.
This can happen if you don't have a primary key defined on the tables in SQL Server
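A minimal sketch of adding one; the table and column names below are taken from the question's query, so adjust them to your schema, and refresh the table in the .dbml designer afterwards so LINQ to SQL picks up the key:
ALTER TABLE dbo.PageItems ADD CONSTRAINT PK_PageItems PRIMARY KEY (Id);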
For some reason the context may not be tracking changes. Try wiring up db.Log to a writer and inspect what LINQ to SQL is doing when you call SubmitChanges():
db.Log = Console.Out;
Then you can watch your output window while running in debug and see what is going on.
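In a WPF app Console output is not always visible, so as an alternative you can point the log at a file instead (the path below is just an example); DataContext.Log accepts any TextWriter:
using (var writer = new System.IO.StreamWriter(@"C:\temp\linq-to-sql.log") { AutoFlush = true })
{
    db.Log = writer;
    db.SubmitChanges();
}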
Are you using a SQL Server Express .mdf file?
There's an article about how this might cause you to get a copy of the file and not the original, causing the symptoms you're describing.
FTA:
I think the project system or Server Explorer wizard offers to 'copy' your mdf into your project directory. Maybe you are operating on a copy of the database and viewing the other in Server Explorer.
I had the same problem: the record in the database I could see in my project was not modified by the SubmitChanges method.
After much trial and research I found out that the system had put another copy of the database, Northwnd.mdf, in the project's output directory \Bin\Debug\Northwnd.mdf. That is where the changes were, in fact, being made.
