TRS-80 POKE Cursor Table Smiley? - cursors

I'm trying to remember the POKE command to change the cursor on a TRS-80 Computer to a smiley face.
POKE 16419,x will change the cursor.
I cannot find a list of "x's"!
I know that 255 gives a rocketship.
Somewhere between 249-250 are gender symbols, etc.
Is there a table somewhere with all these symbols? I have googled and googled, and with the rise of the smiley emoji, I think the answer is buried under unrelated results...
Here is a nice place to try these, by the way:
http://people.cs.ubc.ca/~pphillip/trs80.html
Thanks,
Rose

FOR X=0 TO 255 : POKE 15360+X, X : NEXT X
will write all 256 characters into video memory so the full character set appears on screen. The smiley is 196. Thanks to Adam V.L. for the help!
Here is a screenshot of the character set and the smiley cursor

This site has a list of characters for the TRS-80 and its equivalent in UNICODE.
The smiley is the decimal 196 (UNICODE codepoint is U+263a):
196 → ☺ (U+263A)
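Such a character-to-Unicode table is easy to keep as a lookup in code. A minimal sketch (only the code confirmed in this thread is filled in; other entries would come from the linked character table):

```python
# Partial TRS-80 screen-code -> Unicode mapping; only the code
# confirmed in this thread is included (196 is the smiley, U+263A).
TRS80_TO_UNICODE = {196: "\u263a"}

code = 196
print(f"{code} -> {TRS80_TO_UNICODE[code]}")  # 196 -> ☺
```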


What SQL Server DataType Should I Store an Email MessageId As? [duplicate]

I'm looking for the maximum character length allowed for an internet Message-ID field for validation purposes within an application. I've reviewed sources such as RFC-2822 and Wikipedia "Message-ID" as well as this SO question, among other various places. The closest answer I can find is "998 characters" because that is the maximum length that the specification allows for each line in an internet message (from RFC-2822), and the Message-ID field cannot be multiple lines.
Is 998 characters the definitive answer? Is there no such limit?
If there's one thing I've learned about email, it's that it is a massively distributed system for fuzzing email software. That is, no matter what the RFCs say, you will find emails violating them, with some email software coping and some failing. I think most will limp along with the robustness principle in mind.
With that out of the way, I think the maximum RFC compliant Message-ID length is 995 characters.
The maximum line length per the RFC you cite is 998 characters. That would include the "Message-ID:" field name, but you can do line folding between the field name and the field body. The line containing the actual Message-ID would then contain a space (the folding whitespace), "<", Message-ID, and ">". Semantically, the angle brackets are not part of the Message-ID. Therefore you end up with a maximum of 998 - 3 = 995 characters.
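As a sanity check of that arithmetic, one can construct the folded header described above and verify the line lengths (a minimal sketch; the 995-character msg-id is just hypothetical filler):

```python
# Build a maximal folded Message-ID header per the reasoning above.
# The msg-id body (995 chars, angle brackets excluded) is filler.
msg_id = "a" * 995

# Fold between field name and field body: the second physical line
# is one folding space, "<", the msg-id, and ">".
line1 = "Message-ID:"
line2 = " <" + msg_id + ">"

assert len(line1) <= 998
assert len(line2) == 998   # exactly at the 998-character line limit
print(len(msg_id))         # 995
```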
Actually there's no limit
RFC2822 defines these productions:
message-id = "Message-ID:" msg-id CRLF
msg-id = [CFWS] "<" id-left "@" id-right ">" [CFWS]
id-left = dot-atom-text / no-fold-quote / obs-id-left
obs-id-left = local-part
local-part = dot-atom / quoted-string / obs-local-part
quoted-string = [CFWS]
DQUOTE *([FWS] qcontent) [FWS] DQUOTE
[CFWS]
CFWS = *([FWS] comment) (([FWS] comment) / FWS)
FWS = ([*WSP CRLF] 1*WSP) / obs-FWS ; Folding white space
So id-left can be a local-part, which can be a quoted-string (and thus contain multiple FWS), so you can fold it as many times as needed to fit any arbitrary length of payload and still comply with the restrictions given by the RFC.
It's quite a wild guess, but I would say 2000 characters is more than enough, and here is why:
The only related length requirement I found is that a message line can't be longer than 998 characters. My wild assumption is that a Message-ID should fit within one line of the message, and that limit is 998 characters. The Message-IDs I have seen are nowhere near that long. So, with all the uncertainty, I would say 1000 characters is a very safe minimum, and around 2000 should cover any scenario with some structural overhead in the data.
https://www.rfc-editor.org/rfc/rfc2822

Permutations of a Word on a Page

Forgive me for the lack of official phrasing; this is a problem given orally in class, as opposed to being written in a problem set. Using the English alphabet with no spaces, commas, periods, etc. (and thus only twenty-six possible letters), how many strings of fifty characters contain the combination "Johndoe" at some location?
Edit: was a little quick to answer, and overlooked something very obvious. Please see the new answer below
This is more suited for something like the Math or Stats Stack Exchange. Having said that, a first estimate is 26^(50-7) * (50-7+1) = 26^43 * 44 arrangements. To see why, ask yourself: how many 50-letter strings over 26 letters exist? Now add the restriction that a 7-letter contiguous word must appear somewhere in the string. This has the effect of "fixing" 7 letters, leaving 43 positions free to vary. However, we can place this 7-letter string anywhere, and there are 44 starting positions for it ("johndoe" at position 0, position 1, all the way to position 43, since "johndoe" will not fit starting at position 44). Note that this over-counts strings in which "johndoe" appears more than once.
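A quick brute-force check on a smaller instance (a hypothetical 2-letter alphabet, target word "ab", string length 5) shows the over-count concretely: the position-count formula gives k^(n-m) * (n-m+1), while direct enumeration counts each string only once.

```python
from itertools import product

# Small instance: alphabet {a, b}, target word "ab", string length 5.
alphabet = "ab"
word = "ab"
n, m, k = 5, len(word), len(alphabet)

# Formula from the answer: fix the word, let the rest vary,
# multiplied by the number of starting positions.
formula = k ** (n - m) * (n - m + 1)

# Exact count: enumerate all k^n strings, keep those containing the word.
exact = sum(1 for s in product(alphabet, repeat=n)
            if word in "".join(s))

print(formula, exact)  # 32 26 -- the formula over-counts
assert formula >= exact
```

The gap (32 vs 26) comes from strings like "ababa" that contain the word at more than one position and are counted once per position by the formula.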

SQL CONVERT STRING

I do struggle with some of these conversions, so I do apologize, I have asked a similar question in the past, but just can't get my head around how to achieve this.
So a value of 50.00 is currently being exported into the following format -
000000000000050000
A value of 25.99 would look like
000000000000025990
This is an 18-character field, left-padded with zeros.
What I am trying to do is convert that to a 19-character string, still with leading zeros, but the representation of 50 or 25.99 is slightly different:
000000000000050000 becomes 0000000000000005000
000000000000025990 becomes 0000000000000002599
Any help would be greatly appreciated.
You would appear to want:
select '00' + left(str, 17)
This is a very strange format. Perhaps you should consider using numeric/decimal, which can accurately represent the number.
A lot of assumptions go into this answer, but...
SELECT '00'+LEFT(OriginalField, 17)
That truncates your original 18th character and simply puts two more zeros on the front.
The solution is not so simple if you need to round the 17th character up based on the dropped digit.
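To show what the rounding case involves, here is a minimal sketch of the same conversion in Python (a translation of the idea, not the asker's SQL): interpret the 18-character field as an integer with three implied decimals, round away the last digit, and re-pad to 19 characters with two implied decimals.

```python
def convert(field18: str) -> str:
    """Convert an 18-char zero-padded value (3 implied decimals)
    to a 19-char zero-padded value (2 implied decimals), rounding
    the dropped digit instead of truncating it."""
    n = int(field18)        # e.g. "000000000000025990" -> 25990
    cents = (n + 5) // 10   # drop the last digit with round-half-up
    return str(cents).zfill(19)

print(convert("000000000000050000"))  # 0000000000000005000
print(convert("000000000000025990"))  # 0000000000000002599
print(convert("000000000000025995"))  # rounds up to ...0002600
```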

How to do Norvig spell check for chinese characters mixed with english letters?

I have a list of product names written in mixture of English letters and numbers and Chinese characters stored in my database.
There is a table called products with the fields name_en, name_zh amongst others.
E.g.
AB 10"机翼
Peter Norvig has a fantastic algorithm for spell check but it only works for English.
I was wondering if there's a way to do something similar for a narrow list of terms containing Chinese characters?
E.g. misspellings such as
A10机翼
AB 10鸡翼
AB 10鸡一
AB 10木几翼
should all suggest AB 10"机翼 as the correct spelling.
How do I do this?
You have a much more complex problem than Norvig's:
Chinese Input-method
The misspellings in your case (at least in your examples) are mostly caused by the pinyin input method. The same typed pinyin "jiyi" (for 机翼, English: airplane wing) can produce different Chinese phrases:
机翼
鸡翼
鸡一
几翼
Chinese Segmentation
Also, in Chinese you need segmentation to break a long sentence into small tokens with semantic meaning. For example:
飞机模型零件 -> before segmentation
飞机-模型-零件 -> after segmentation, you get three phrases separated by '-'.
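A minimal sketch of dictionary-based segmentation (forward maximum matching; the tiny dictionary here is an assumption for illustration only — a real system would use a full lexicon or a library such as jieba):

```python
def fmm_segment(text, dictionary, max_len=4):
    """Forward maximum matching: at each position, greedily take the
    longest dictionary word; fall back to a single character."""
    tokens, i = [], 0
    while i < len(text):
        for size in range(min(max_len, len(text) - i), 0, -1):
            piece = text[i:i + size]
            if size == 1 or piece in dictionary:
                tokens.append(piece)
                i += size
                break
    return tokens

# Hypothetical mini-dictionary covering the example in the text.
words = {"飞机", "模型", "零件"}
print("-".join(fmm_segment("飞机模型零件", words)))  # 飞机-模型-零件
```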
Work on the token-level
You probably can experiment starting from a list of mis-spellings. I guess you can collect a bunch of them from your user logs. Take out one misspelling at a time, using your example:
AB 10鸡翼
First break it into tokens:
A-B-10-鸡翼
(here you probably need a Chinese segmentation algorithm to realize that 鸡翼 should be treated together).
Then you should try to find its nearest neighbor in your product db using the edit distance idea. Note that:
you do not remove/edit/replace one character at a time, but remove/edit/replace one token at a time.
when edit/replace, we should limit our candidates to be those near neighbors of the original token. For example, 鸡翼 -> 机翼,几翼,机一
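The token-level matching described above can be sketched as follows (the token lists and the homophone-neighbor table are hypothetical illustrations): standard Levenshtein distance, but computed over tokens rather than characters, with cheap substitution allowed only between tokens listed as near neighbors of each other.

```python
def token_edit_distance(a, b, neighbors):
    """Levenshtein over token lists; substitution costs 1 only if the
    tokens are near neighbors, otherwise it is effectively forbidden
    (cost 2, same as a delete plus an insert)."""
    m, n = len(a), len(b)
    d = [[i + j if i * j == 0 else 0 for j in range(n + 1)]
         for i in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            sub_ok = (a[i-1] == b[j-1]
                      or b[j-1] in neighbors.get(a[i-1], ()))
            d[i][j] = min(
                d[i-1][j] + 1,                           # delete a token
                d[i][j-1] + 1,                           # insert a token
                d[i-1][j-1] + (0 if a[i-1] == b[j-1]
                               else 1 if sub_ok else 2)  # substitute
            )
    return d[m][n]

# Hypothetical neighbor table: homophones from the same pinyin "jiyi".
neighbors = {"鸡翼": {"机翼", "几翼"}}

query = ["A", "B", "10", "鸡翼"]    # tokenized user input
name  = ["AB", "10", "机翼"]        # tokenized product name

print(token_edit_distance(query, name, neighbors))  # 4
```

The neighbor restriction is what keeps the search focused: 鸡翼 can cheaply become 机翼, but an unrelated token cannot.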
Build Lucene index
You can also tackle the problem from the other direction, starting from your correct product names. Treat each product name as a document and pre-build a Lucene index from them. Then each user query becomes a search problem: issue the query to the search engine and find the best-matching documents in your db. In this case, I believe Lucene would probably take care of the segmentation (if not, you would need to extend its functionality to suit your own needs) and tokenization for you.

Reading REAL's from file in FORTRAN 77 - odd results

I'm currently messing around in FORTRAN 77 and I've run into a problem that I can't seem to figure out. I'm trying to read from a file that looks similar to below:
000120 Description(s) here 18 7 10.15
000176 Description(s) here 65 20 56.95
...
The last column in each row is a monetary amount (never greater than 100). I am trying to read the file by using code similar to below
integer pid, qty, min_qty
real price
character*40 descrip
open(unit=2, file='inventory.dat', status='old')
read(2, 100, IOSTAT=iend) pid, descrip, qty, min_qty, price
100 format(I11, A25, I7, I6, F5)
Everything seems to be read just fine, except for the last column. When I check the value of price for, say, the second line, I get something like 56.8999999999 instead of 56.95.
Now, I understand that I might have trailing 9's or whatnot because it's not totally precise, but shouldn't it be a little closer to 95 cents? Maybe there's something that I'm doing wrong, I'm not sure. Hopefully I'm just not stuck with my program running like this! Any help is greatly appreciated!
Is that exactly the code you use to read the file? Do you have "X" formats to align the columns? Such as (I11, A25, 2X, I7, 3X, I6, 3X, F5) (with made up values). If you got the alignment off by one and read only "56.9" for "56.95", then floating point imprecision could easily give you 56.89999, which is very close to 56.9
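The imprecision itself is easy to reproduce outside Fortran: a single-precision REAL cannot store 56.9 (or 56.95) exactly. A quick illustration in Python, round-tripping through IEEE 754 single precision (this is a demonstration of the effect, not the asker's program):

```python
import struct

def as_float32(x: float) -> float:
    """Round-trip a value through IEEE 754 single precision,
    the usual storage format of a default FORTRAN 77 REAL."""
    return struct.unpack('f', struct.pack('f', x))[0]

for v in (56.9, 56.95):
    stored = as_float32(v)
    print(v, "->", repr(stored))

# Neither value is exactly representable in 32 bits; the stored value
# differs slightly from the decimal literal, and that difference shows
# up when the value is printed with many digits.
assert as_float32(56.9) != 56.9
assert abs(as_float32(56.95) - 56.95) < 1e-4
```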
You could also read each line into a string and then read the numbers from sub-strings; this requires only identifying the column positions precisely. Once the sub-strings contain only spaces and numbers, you can use a less finicky list-directed internal read: read (string(30:80), *) qty, min_qty, price.
