
Viewing HTML on an S-100 machine, with pictures.

----------------------
The evolution of the JPGtoFABGL script

# Ver 0.5 12/1/2023
# By John Galt Furball1985

from PIL import Image
import sys, termios, tty, os, time

def getch():
    fd = sys.stdin.fileno()
    old_settings = termios.tcgetattr(fd)
    try:
        tty.setraw(fd)
        ch = sys.stdin.read(1)
    finally:
        termios.tcsetattr(fd, termios.TCSADRAIN, old_settings)
    return ch


# python JPGTOFABGL % A T 0 0

arg1 = sys.argv[1]  # filename
arg2 = sys.argv[2]  # (A)uto, (S)emi or (M)anual mode
arg3 = sys.argv[3]  # transparency: 'T' for transparent, 'N' for not
arg4 = sys.argv[4]  # X (manual mode)
arg5 = sys.argv[5]  # Y (manual mode)
esc = chr(27)

FILENAME = arg1  # image can be in GIF, JPEG or PNG format
im = Image.open(FILENAME).convert('RGB')
pix = im.load()
w = im.size[0]
h = im.size[1]
YY = w
XX = h

# Automatic mode: one image centered on screen
if arg2 == "A" or arg2 == "a":
    offsetx = (512 - YY) // 2  # picture in the center of the 512x384 screen
    offsety = (384 - XX) // 2

# Semi-automatic mode: user positions the image - center, left, right, middle
if arg2 == "S" or arg2 == "s":
    key = getch()

    if key == "L" or key == "l":    # image left-top
        offsetx = 128 - (YY // 2)
        offsety = 40
    elif key == "R" or key == "r":  # image right-top
        offsetx = 384 - (YY // 2)
        offsety = 40
    elif key == "M" or key == "m":  # image middle-top
        offsetx = 256 - (YY // 2)
        offsety = 40
    elif key == "C" or key == "c":  # image centered on screen
        offsetx = (512 - YY) // 2
        offsety = (384 - XX) // 2
    elif key == "Q" or key == "q" or key == chr(27):
        sys.exit(0)
    else:
        sys.exit(0)  # any other key exits back to the browser

# Manual mode: honor the user's X,Y from the command line
if arg2 == "M" or arg2 == "m":
    offsetx = int(arg5)
    offsety = int(arg4)

# OUTPUT TO FABGL TERMINAL IN COLOR
if w <= 500 and h <= 350:  # range check so we stay inside the 500x350 resolution
    for i in range(w):
        for j in range(h):
            if arg3 == "T" or arg3 == "t":
                # image is transparent: skip pure black pixels
                if pix[i, j] != (0, 0, 0):
                    print(esc + "[H")
                    print(esc + "_GPEN" + str(pix[i, j][0]) + ";" + str(pix[i, j][1]) + ";" + str(pix[i, j][2]))
                    print(esc + "_GPIXEL" + str(i + offsetx) + ";" + str(j + offsety))
            else:
                # image is not transparent
                print(esc + "[H")
                print(esc + "_GPEN" + str(pix[i, j][0]) + ";" + str(pix[i, j][1]) + ";" + str(pix[i, j][2]))
                print(esc + "_GPIXEL" + str(i + offsetx) + ";" + str(j + offsety))

print(esc + "_F0;15")
print(esc + "_GPEN255;255;255")
----------------------------------------------------

The indents do not display correctly on the forums; eventually this will be on my GitHub.

explanation:

Because of the way 'elinks' hands images off to external scripts, you normally don't get any interaction with them.

I have set up the script to work 3 ways:
1) Automatic
2) Semi-Automatic
3) Manual

There is also transparency support.

You set up how you want the script to run from the elinks handler:

set mime.handler.image_viewer.unix.program = "python jpgtofabgl % A N 0 0"

This sets the script to automatic with no transparency. Transparency determines whether the color black is treated as solid or see-through.

set mime.handler.image_viewer.unix.program = "python jpgtofabgl % A T 0 0"

This sets the script to automatic with transparency.

set mime.handler.image_viewer.unix.program = "python jpgtofabgl % M N 100 100"

This sets the script to manual, no transparency, and starts the picture at 100,100.

When in Automatic you can only display one picture, located in the center of the terminal screen.
When in Semi-Automatic you can display pictures in 4 positions. You will not be prompted, but you can press L/l, R/r, M/m or C/c; this will place 3 pictures at the top and 1 in the center of the screen, depending on your keypress.
Any mistake in keypress will exit back to the browser and you can try again.

This way it's very customizable, or ready to go.

I have to bug-test it extensively, but you can get an idea of what is going on.

Python makes it REALLY hard to poll the keyboard inside a loop; basically they don't want unseen keyboard input, since it can be used for a key logger.
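For what it's worth, the standard `select` module can ask the OS whether a key is waiting without blocking the loop. This is only a sketch, not part of the script above, and `poll_key` is a name I made up; the terminal still has to be in raw/cbreak mode (as `getch()` above does with `tty.setraw`) for single keypresses to arrive unbuffered:

```python
import select

def poll_key(stream, timeout=0.0):
    """Return one character if input is waiting on `stream`, else None.

    select() only reports whether the descriptor is readable, so with
    a zero timeout this never blocks the drawing loop.
    """
    ready, _, _ = select.select([stream], [], [], timeout)
    if ready:
        return stream.read(1)
    return None
```

Called once per row of the pixel loop with `sys.stdin`, something like this would let a Q keypress abort a long image draw.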


In working with GIMP I'm now able to pre-dither the photos and don't have to worry about any post-processing.
I hope to put up a test page with JPG images and then play around with the new script.
 
The freeze-frame machine I designed and built required an interesting memory controller circuit. I wanted it to work for color video (still, it all fit on a card about the same size as an S-100 board). For a PAL color frame you need to store 8 fields because of the unique burst phase on the 8 fields, 4 fields for color NTSC, and 2 fields for monochrome. The design was simplified quite a lot by the use of the AL422B field store IC; a single one can store a whole field with space to spare.


A 4-field NTSC color version could be done with just four AL422B's and a less complex memory controller. Some effects could be built onto it with software control from the S-100 bus to initiate image capture, plus effects like a negative picture.

Isn't full color, high resolution capture a bit of an overkill for an S-100 machine, of which few had high-resolution color graphics cards? Also, the display quality of the time was very limited, so why capture the whole screen? It was the 486 and the Video Blaster by Creative Labs that finally made full-motion capture possible, around '94 or so IIRC... Very early. But you're describing stuff a decade preceding that.

I'd love to hear more of the thinking there.
 
The script I wrote works, but the browser seems to randomly crash when I have it in semi mode.
It has to be the extra keyboard input.

So I set the script to full automatic, where it just places the image in the center of the screen (allows for max-sized photos).
I also found out ELINKS has had some development.
I updated to version 0.14.

I left a message on the GitHub showing what I'm doing here.

Maybe the idea will be integrated into the browser in the future.

Here is a photo of this page, using semi-automatic mode (but it crashes the browser more often).

https://johngalt01.github.io/truck.html

DSCN5983.JPG

The pictures are from the late 1990s, from a very old website I used to have for R/C cars.


As a test I used GIMP with dithering set to match the FabGL ANSI terminal color palette of 64 colors (4R x 4G x 4B).
The top-right photo has no dithering, and you can see how flat it looks.

The photo of the screen looks much harsher than it does to your eye. Dithering does make the photos look nicer, and you can see more detail in them this way.

The smaller photos are 100x75 (5-minute load) and the larger 150x113 (12-minute load); I tried to make them 4:3 ratio.
You can see how photos that are small in the browser are so large on the terminal.

The original photo sizes were 81x61 for the thumbnail preview and 400x300 if you wanted to see the larger one.

A 400x300 picture would take about an hour to display on the terminal at 9600 baud.
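That hour estimate is consistent with a quick back-of-the-envelope count. Assuming roughly 35 bytes of escape sequences per pixel (cursor home plus _GPEN plus _GPIXEL; my estimate, not a measured figure) and about baud/10 bytes per second on an 8N1 serial link:

```python
def load_time_minutes(width, height, bytes_per_pixel=35, baud=9600):
    """Rough serial draw-time estimate for the escape-sequence-per-pixel
    approach: total bytes divided by the 8N1 byte rate (baud/10)."""
    total_bytes = width * height * bytes_per_pixel
    return total_bytes / (baud / 10) / 60.0
```

With these assumptions a 400x300 image comes out near 73 minutes, and 100x75 near 4.6 minutes, in line with the load times quoted above.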
 
I'm playing around with an automatic resizer, so any image is displayed smaller.

#automatic resize

image = Image.open(FILENAME)
image.thumbnail((100, 100))
image.save('TEMP.JPG')

im=Image.open('TEMP.JPG').convert('RGB')

The 100-pixel width makes sure most pictures only take 5 minutes to load.
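PIL's `thumbnail()` shrinks in place, never enlarges, and keeps the aspect ratio by fitting the longer side to the box. The helper below reproduces that arithmetic as a sketch (`fit_within` is my name for it, and it approximates rather than exactly matches Pillow's rounding):

```python
def fit_within(w, h, box):
    """Scale (w, h) to fit inside a box x box square, keeping the
    aspect ratio and never scaling up - the behaviour relied on by
    Image.thumbnail((box, box))."""
    scale = min(box / w, box / h, 1.0)
    return round(w * scale), round(h * scale)
```

So a 400x300 photo becomes 100x75, while an 81x61 thumbnail is left alone.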

It works.


I found an automatic dithering library.
 
I have not had luck getting the auto-dithering to work, because Python is driving me insane with version issues.

This is the latest code:

It adds automatic resizing of the picture, and it's configurable from inside the browser.

"python JPGTOFABGL % A N 0 0 100"

This sets conversion to: automatic (one image at screen center), transparency off, X placeholder (manual mode), Y placeholder (manual mode), and a resolution limiter that maintains aspect ratio.

This has been pretty stable with version 0.14 of the ELINKS browser.

One issue: the resizer can only take JPGs; other formats bomb out.




-------------------------------------------------------------

# Ver 0.61 12/3/2023
# By John Galt Furball1985

from PIL import Image
import sys, termios, tty, os, time

def getch():
    fd = sys.stdin.fileno()
    old_settings = termios.tcgetattr(fd)
    try:
        tty.setraw(fd)
        ch = sys.stdin.read(1)
    finally:
        termios.tcsetattr(fd, termios.TCSADRAIN, old_settings)
    return ch


# python JPGTOFABGL % A T 0 0 100

arg1 = sys.argv[1]  # filename
arg2 = sys.argv[2]  # (A)uto, (S)emi or (M)anual mode
arg3 = sys.argv[3]  # transparency: 'T' for transparent, 'N' for not
arg4 = sys.argv[4]  # X (manual mode)
arg5 = sys.argv[5]  # Y (manual mode)
arg6 = sys.argv[6]  # resize limit

esc = chr(27)

FILENAME = arg1  # image can be in GIF, JPEG or PNG format

# automatic resize
image = Image.open(FILENAME)
image.thumbnail((int(arg6), int(arg6)))  # shrinks in place, keeps aspect ratio
image.save('TEMP.JPG')

im = Image.open('TEMP.JPG').convert('RGB')
pix = im.load()
w = im.size[0]
h = im.size[1]
YY = w
XX = h

# Automatic mode: one image centered on screen
if arg2 == "A" or arg2 == "a":
    offsetx = (512 - YY) // 2  # picture in the center of the 512x384 screen
    offsety = (384 - XX) // 2

# Semi-automatic mode: user positions the image - center, left, right, middle
if arg2 == "S" or arg2 == "s":
    key = getch()

    if key == "L" or key == "l":    # image left-top
        offsetx = 128 - (YY // 2)
        offsety = 40
    elif key == "R" or key == "r":  # image right-top
        offsetx = 384 - (YY // 2)
        offsety = 40
    elif key == "M" or key == "m":  # image middle-top
        offsetx = 256 - (YY // 2)
        offsety = 40
    elif key == "C" or key == "c":  # image centered on screen
        offsetx = (512 - YY) // 2
        offsety = (384 - XX) // 2
    elif key == "Q" or key == "q" or key == chr(27):
        sys.exit(0)
    else:
        # any other key: fall back to centering instead of exiting
        offsetx = (512 - YY) // 2
        offsety = (384 - XX) // 2

# Manual mode: honor the user's X,Y from the command line
if arg2 == "M" or arg2 == "m":
    offsetx = int(arg5)
    offsety = int(arg4)

# OUTPUT TO FABGL TERMINAL IN COLOR
if w <= 500 and h <= 350:  # range check so we stay inside the 500x350 resolution
    for i in range(w):
        for j in range(h):
            if arg3 == "T" or arg3 == "t":
                # image is transparent: skip pure black pixels
                if pix[i, j] != (0, 0, 0):
                    print(esc + "[H")
                    print(esc + "_GPEN" + str(pix[i, j][0]) + ";" + str(pix[i, j][1]) + ";" + str(pix[i, j][2]))
                    print(esc + "_GPIXEL" + str(i + offsetx) + ";" + str(j + offsety))
            else:
                # image is not transparent
                print(esc + "[H")
                print(esc + "_GPEN" + str(pix[i, j][0]) + ";" + str(pix[i, j][1]) + ";" + str(pix[i, j][2]))
                print(esc + "_GPIXEL" + str(i + offsetx) + ";" + str(j + offsety))

print(esc + "_F0;15")
print(esc + "_GPEN255;255;255")
 
OK, this fixes the resizer issue with file extensions, so it will work on other file types besides JPG.

I'm starting to hate how Python makes everything overly complicated.

link to the code

-------------------------------------

# Ver 0.62 12/3/2023
# By John Galt Furball1985

from PIL import Image
import sys, termios, tty, os, time

def getch():
    fd = sys.stdin.fileno()
    old_settings = termios.tcgetattr(fd)
    try:
        tty.setraw(fd)
        ch = sys.stdin.read(1)
    finally:
        termios.tcsetattr(fd, termios.TCSADRAIN, old_settings)
    return ch


# python JPGTOFABGL % A T 0 0 100

arg1 = sys.argv[1]  # filename
arg2 = sys.argv[2]  # (A)uto, (S)emi or (M)anual mode
arg3 = sys.argv[3]  # transparency: 'T' for transparent, 'N' for not
arg4 = sys.argv[4]  # X (manual mode)
arg5 = sys.argv[5]  # Y (manual mode)
arg6 = sys.argv[6]  # resize limit

esc = chr(27)

FILENAME = arg1  # image can be in GIF, JPEG or PNG format

extension = os.path.splitext(FILENAME)[1]

# automatic resize
image = Image.open(FILENAME)
tempimage = 'temp' + extension  # keep the original extension so PIL saves the right format
image.thumbnail((int(arg6), int(arg6)))
image.save(tempimage)

im = Image.open(tempimage).convert('RGB')
pix = im.load()
w = im.size[0]
h = im.size[1]
YY = w
XX = h

# Automatic mode: one image centered on screen
if arg2 == "A" or arg2 == "a":
    offsetx = (512 - YY) // 2  # picture in the center of the 512x384 screen
    offsety = (384 - XX) // 2

# Semi-automatic mode: user positions the image - center, left, right, middle
if arg2 == "S" or arg2 == "s":
    key = getch()

    if key == "L" or key == "l":    # image left-top
        offsetx = 128 - (YY // 2)
        offsety = 40
    elif key == "R" or key == "r":  # image right-top
        offsetx = 384 - (YY // 2)
        offsety = 40
    elif key == "M" or key == "m":  # image middle-top
        offsetx = 256 - (YY // 2)
        offsety = 40
    elif key == "C" or key == "c":  # image centered on screen
        offsetx = (512 - YY) // 2
        offsety = (384 - XX) // 2
    elif key == "Q" or key == "q" or key == chr(27):
        sys.exit(0)
    else:
        # any other key: fall back to centering instead of exiting
        offsetx = (512 - YY) // 2
        offsety = (384 - XX) // 2

# Manual mode: honor the user's X,Y from the command line
if arg2 == "M" or arg2 == "m":
    offsetx = int(arg5)
    offsety = int(arg4)

# OUTPUT TO FABGL TERMINAL IN COLOR
if w <= 500 and h <= 350:  # range check so we stay inside the 500x350 resolution
    for i in range(w):
        for j in range(h):
            if arg3 == "T" or arg3 == "t":
                # image is transparent: skip pure black pixels
                if pix[i, j] != (0, 0, 0):
                    print(esc + "[H")
                    print(esc + "_GPEN" + str(pix[i, j][0]) + ";" + str(pix[i, j][1]) + ";" + str(pix[i, j][2]))
                    print(esc + "_GPIXEL" + str(i + offsetx) + ";" + str(j + offsety))
            else:
                # image is not transparent
                print(esc + "[H")
                print(esc + "_GPEN" + str(pix[i, j][0]) + ";" + str(pix[i, j][1]) + ";" + str(pix[i, j][2]))
                print(esc + "_GPIXEL" + str(i + offsetx) + ";" + str(j + offsety))

print(esc + "_F0;15")
print(esc + "_GPEN255;255;255")

------------------------

The point of this is to make sure that if you're clicking on links, you don't accidentally click on a 100 MB file with 4K resolution and then spend 15 days watching it draw a fragment and crash.

ATARI site from yesterday, now with the resizer addition.

DSCN5984.JPG
 
Isn't full color, high resolution capture a bit of an overkill for an S-100 machine, of which few had high-resolution color graphics cards? Also, the display quality of the time was very limited, so why capture the whole screen? It was the 486 and the Video Blaster by Creative Labs that finally made full-motion capture possible, around '94 or so IIRC... Very early. But you're describing stuff a decade preceding that.

I'd love to hear more of the thinking there.
What I described, the NTSC 4-field store, would probably be too fancy and "modern" for the era of my SOL-20, I agree.

But I think if you can make the board period-correct from parts of the SOL's era, that is good enough, even if it worked better than the Dazzler in high-res mode, or the Matrox ALT-512's 4 shades of grey.

In my design the AL422 is the main modern IC that is anachronistic with respect to S-100 technology; most of the others were TTLs and some analog parts from the '80s era. There were A/D and D/A converter chips in the late '70s that were up to the task of a field or frame capture.

To keep it period-correct, it could be shrunk down to a much smaller memory requirement, and just monochrome would do: 16 shades of grey, one data byte serving two pixels, and a single-field capture only (which also completely avoids motion artifacts). I think I can do this with all pre-1980 components on an S-100 board; it would probably need 32K of memory (have to check). If that required too much memory, I would simply go to 4 shades of grey, so one byte could serve 4 pixels.

Tinkering around with monochrome video, I was surprised how good a 4-shades-of-grey image looked (not as good as 16 or 64, obviously). There are some 4-shade images in the Matrox article, page 19.


The ALT-512 had 256 x 240 pixels and, because of its two display planes, 4 shades of grey; that used up 15360 memory bytes, or basically one 16K memory card in a SOL.

In any case, for an S-100 video capture card I would make it from all period-correct parts, like I have for all of my S-100 card projects so far.
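The memory figures in the last few paragraphs check out if you pack pixels at log2(levels) bits each and assume the ALT-512's 256 x 240 resolution (my assumption for the 16-shade case; the original only says "probably 32k"):

```python
import math

def field_bytes(width, height, grey_levels):
    """Bytes to store one captured field, packing each pixel into
    ceil(log2(grey_levels)) bits."""
    bits_per_pixel = math.ceil(math.log2(grey_levels))
    return width * height * bits_per_pixel // 8
```

At 256 x 240, 16 grey levels (4 bits, two pixels per byte) needs 30720 bytes, so a 32K board would indeed do; 4 levels (2 bits, four pixels per byte) needs 15360 bytes, matching the ALT-512 figure.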
 
What I described, the NTSC 4-field store, would probably be too fancy and "modern" for the era of my SOL-20, I agree.

But I think if you can make the board period-correct from parts of the SOL's era, that is good enough, even if it worked better than the Dazzler in high-res mode, or the Matrox ALT-512's 4 shades of grey.

In my design the AL422 is the main modern IC that is anachronistic with respect to S-100 technology; most of the others were TTLs and some analog parts from the '80s era. There were A/D and D/A converter chips in the late '70s that were up to the task of a field or frame capture.


They were up to the task alright... of being expensive! I remember how crazy expensive they were... I always wanted to build myself a portable digital oscilloscope in 1986/87, and the A/D converter made it impractical for most applications... 10 Msps would have been enough for my needs, and 20 Msps would have been fantastic. Even the low-cost converters of the era were crazy expensive, though... I recall wishing for more.

To keep it period-correct, it could be shrunk down to a much smaller memory requirement, and just monochrome would do: 16 shades of grey, one data byte serving two pixels, and a single-field capture only (which also completely avoids motion artifacts). I think I can do this with all pre-1980 components on an S-100 board; it would probably need 32K of memory (have to check). If that required too much memory, I would simply go to 4 shades of grey, so one byte could serve 4 pixels.

Oh, now that idea I really like. You could do 4 shades (or 3 shades plus black level) with cheap, fast comparators, and it's a very simple circuit with a digital output... And you could have four twiddle knobs to set them wherever you needed them, and even a fifth for overall gain... That would be super-fast and super-easy... And of course, everyone loved knobs in the 80s, even if they messed things up more than they solved them. Speaking of which, I haven't seen a graphic equaliser in ages, but there's the ultimate in knob technology. :)

And you could also have an adjustable start and adjustable length, allowing for "zooming" in on a part of the image and changing resolutions, or overscan correction. It would need a clock phase adjustment capability too, but might not need a crystal... A free-running oscillator would be enough if there was sync detection, and while it would change with temperature, it would be consistent enough line to line, even if it took a little while to start.

And the RAM could just be a small static RAM that is multiplexed, since it doesn't need to be real-time. 16K would be sufficient. Or two 64K x 1-bit SRAMs.

Now you've got me going on liking the idea. That's something I could have made back in the 80s, even with my limited experience back then.

I've seen some super-simple circuits that did dithering of the image in real time and could display semi-live video on a Spectrum computer, so it's within the range of what a Z80 can handle. It lowered the effective resolution a lot, though; IIRC, people mainly used them to digitise faces or outlines.

The simplest circuit I've seen, though, is this one:

4 frames/sec video example 1 bit output -
RC2014 PCB - https://github.com/ZXQuirkafleeg/ZX-Videoface
Schematic



Tinkering around with monochrome video, I was surprised how good a 4-shades-of-grey image looked (not as good as 16 or 64, obviously). There are some 4-shade images in the Matrox article, page 19.


The ALT-512 had 256 x 240 pixels and, because of its two display planes, 4 shades of grey; that used up 15360 memory bytes, or basically one 16K memory card in a SOL.

In any case, for an S-100 video capture card I would make it from all period-correct parts, like I have for all of my S-100 card projects so far.

I was impressed when I read your document. It was remarkably good compared to the 1 bit example above.
I never had such a card back in the day. The VideoBlaster was the first decent card I had; it had video overlay as well as capture and could write an MPG in real time. Very low resolution, but it worked... It's incredible how far that technology has progressed.
 
I was impressed when I read your document. It was remarkably good compared to the 1 bit example above.
I never had such a card back in the day. The VideoBlaster was the first decent card I had; it had video overlay as well as capture and could write an MPG in real time. Very low resolution, but it worked... It's incredible how far that technology has progressed.
It is really interesting how good a 1-bit image can look (as in the video you posted) if the drawing is done right. It is somewhat analogous to a newspaper print, where it is just black dots on white, yet in the right proportions it can look very realistic. Part of it is the way the human visual system picks up on clues from simple line drawings, like cartoons: a lot of information can be conveyed with a minimal amount of data, like the expression on a face (angry, sad, happy, surprised, etc.), with just a very limited number of drawn lines.

Now you have given me some ideas too, especially about making a simplified A/D converter if I went with 4 grey levels. One thing is to line-lock the divided-down pixel clock to the incoming video; that is easy to do, as I did in the freeze-frame machine, and it can be done with very few period-correct parts. Depending on how fast I could get a screen refresh with a reasonable look, it could be made to accept a standard monochrome video signal (or NTSC color with the sub-carrier filtered off) and replay it as a 4-shade image at 1 to possibly 3 or 4 fps. Which I think would look very good, especially if the original material was a cartoon.

Talk about mission creep: we went from single-frame capture to moving video!

(But I guess it is a little off the topic of the thread, so apologies for that.)
 
Woooohooooo!

I got the dithering working.

I can now process a normal web photo and limit the color palette to the 64 colors of the FabGL terminal, and control the image size for the terminal, all from inside the ELINKS browser.

-----------------------------------
# Ver 0.63 12/3/2023
# By John Galt Furball1985

from PIL import Image
import sys, termios, tty, os, time

def getch():
    fd = sys.stdin.fileno()
    old_settings = termios.tcgetattr(fd)
    try:
        tty.setraw(fd)
        ch = sys.stdin.read(1)
    finally:
        termios.tcsetattr(fd, termios.TCSADRAIN, old_settings)
    return ch


# python JPGTOFABGL % A T 0 0 100

arg1 = sys.argv[1]  # filename
arg2 = sys.argv[2]  # (A)uto, (S)emi or (M)anual mode
arg3 = sys.argv[3]  # transparency: 'T' for transparent, 'N' for not
arg4 = sys.argv[4]  # X (manual mode)
arg5 = sys.argv[5]  # Y (manual mode)
arg6 = sys.argv[6]  # resize limit

esc = chr(27)

FILENAME = arg1  # image can be in GIF, JPEG or PNG format

# automatic resize and dither
image = Image.open(FILENAME)
tempimage = 'temp.png'  # PNG holds the dithered, palettized image

image.thumbnail((int(arg6), int(arg6)))  # shrinks in place, keeps aspect ratio

imageout = image.convert("P", dither=Image.FLOYDSTEINBERG, palette=Image.WEB, colors=64)
imageout.save(tempimage)

im = Image.open(tempimage).convert('RGB')

pix = im.load()
w = im.size[0]
h = im.size[1]
YY = w
XX = h

# Automatic mode: one image centered on screen
if arg2 == "A" or arg2 == "a":
    offsetx = (512 - YY) // 2  # picture in the center of the 512x384 screen
    offsety = (384 - XX) // 2

# Semi-automatic mode: user positions the image - center, left, right, middle
if arg2 == "S" or arg2 == "s":
    key = getch()

    if key == "L" or key == "l":    # image left-top
        offsetx = 128 - (YY // 2)
        offsety = 40
    elif key == "R" or key == "r":  # image right-top
        offsetx = 384 - (YY // 2)
        offsety = 40
    elif key == "M" or key == "m":  # image middle-top
        offsetx = 256 - (YY // 2)
        offsety = 40
    elif key == "C" or key == "c":  # image centered on screen
        offsetx = (512 - YY) // 2
        offsety = (384 - XX) // 2
    elif key == "Q" or key == "q" or key == chr(27):
        sys.exit(0)
    else:
        # any other key: fall back to centering instead of exiting
        offsetx = (512 - YY) // 2
        offsety = (384 - XX) // 2

# Manual mode: honor the user's X,Y from the command line
if arg2 == "M" or arg2 == "m":
    offsetx = int(arg5)
    offsety = int(arg4)

# OUTPUT TO FABGL TERMINAL IN COLOR
if w <= 500 and h <= 350:  # range check so we stay inside the 500x350 resolution
    for i in range(w):
        for j in range(h):
            if arg3 == "T" or arg3 == "t":
                # image is transparent: skip pure black pixels
                if pix[i, j] != (0, 0, 0):
                    print(esc + "[H")
                    print(esc + "_GPEN" + str(pix[i, j][0]) + ";" + str(pix[i, j][1]) + ";" + str(pix[i, j][2]))
                    print(esc + "_GPIXEL" + str(i + offsetx) + ";" + str(j + offsety))
            else:
                # image is not transparent
                print(esc + "[H")
                print(esc + "_GPEN" + str(pix[i, j][0]) + ";" + str(pix[i, j][1]) + ";" + str(pix[i, j][2]))
                print(esc + "_GPIXEL" + str(i + offsetx) + ";" + str(j + offsety))

print(esc + "_F0;15")
print(esc + "_GPEN255;255;255")

-------------------

Full-size images test page:

https://johngalt01.github.io/truck.html

The script will now scale the images to 100 pixels wide while holding the aspect ratio, and add dithering for the 64 colors available on the FabGL terminal.

Users can control how large or small to make the images from within the ELINKS settings.

The JPGTOFABGL Python script is updated on the GitHub.
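One note on the Pillow call in the script: `palette=Image.WEB` selects Pillow's web-safe palette (216 colours), and the `colors=` argument only applies to `Image.ADAPTIVE`, so the dither is not against the terminal's exact 4x4x4 cube. A closer match could be to build the 64-entry palette by hand and quantize against it. This is a sketch, assuming the terminal's channel levels are 0/85/170/255 (an assumption, not taken from the FabGL docs):

```python
from PIL import Image

# Assumed FabGL levels: 4 steps per channel at 0, 85, 170, 255.
LEVELS = [0, 85, 170, 255]

def fabgl_palette_image():
    """A 1x1 'P' image carrying the 4x4x4 (64-colour) cube, padded
    with black to the 256 entries a PIL palette expects."""
    pal = [c for r in LEVELS for g in LEVELS for b in LEVELS for c in (r, g, b)]
    pal += [0, 0, 0] * (256 - 64)
    pimg = Image.new("P", (1, 1))
    pimg.putpalette(pal)
    return pimg

def dither_to_fabgl(img):
    """Floyd-Steinberg dither an image against the 64-colour cube."""
    return img.convert("RGB").quantize(
        palette=fabgl_palette_image(), dither=Image.FLOYDSTEINBERG)
```

The result is a "P" image whose palette entries all fall on the 64 colours the terminal can show; `.convert('RGB')` afterwards recovers the per-pixel values for the _GPEN output, just as the script already does.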
 
Test website:


Loading on the fly with the newest script. ELinks setting "python jpgtofabgl % A N 0 0 100": automatic centered image, transparency off, X,Y disabled, image display size 100 wide maintaining aspect ratio, dithered image.
Approx. 5-minute picture load.

DSCN5985.JPG





Loading on the fly with the newest script. ELinks setting "python jpgtofabgl % A N 0 0 200": automatic centered image, transparency off, X,Y disabled, image display size 200 wide maintaining aspect ratio, dithered image.
Approx. 17-minute picture load.
DSCN5986.JPG

The change to resolution was made inside ELINKS.
 
You can get a WiFi-to-serial dongle for around $10 which will let you host a server, or open TCP or UDP connections to another server, all controlled by the serial port. If you have a spare serial port, it would let you add a basic web server directly to the S-100 machine, which would give you a browser.

You would need to decode the JPGs and GIFs on a Z80 system, but it would get you closer to where you seem to be headed?

Though I don't know Python, so perhaps that's a bigger jump from a modern language to an older one than I realize. There's some code available for the Amstrad, I think, to decode GIFs at least :)
 
I didn't know Python either; I just started to mess with it this week.

I basically got everything I wanted.

Anyone can follow in my footsteps and improve it.
 
I am not sure I understand the setup... please tell me if I am wrong:

You are using an S100 machine with two serial ports. The first serial port is connected to an ESP32 with FabGL, to serve as the terminal interface. The second serial port is connected to a Raspberry Pi, using a terminal program (either using a WiFi modem and connecting through IP, or a direct serial connection). The Raspberry Pi runs the web browser and renders the output into FabGL codes, and FabGL displays them using VGA.

Why can't you just connect the FabGL to the Raspberry Pi, without the S100 in the middle?
 
You could. You could go even further today and put everything on an ESP32: have it emulate an Altair with the ANSI terminal, then just jack in a WiFi modem card.


Most people are going to have an S-100 machine with some kind of VT-100-capable serial terminal, so why not use what the terminal is capable of?
At the same time, you really can't use a modem anymore, so you're going to use a WiFi modem and telnet into a BBS.

So I'm using an S-100 machine with terminal software (IMP/Kermit) to log into my Raspberry Pi over telnet via the WiFi modem, where I do some compiling work, and that allows me to run a text-based browser.

Now, I could also have ELINKS on a TCP connection and log straight into the browser, but getting into a telnet session allows me to do other things, and I still have my MP/M Net2 running on my Raspberry Pi.

Using said browser you're normally limited to text-only sites, and now you are not.

You can view images, taking advantage of the terminal's capabilities.

It's just like a video card: you always wanted to swap it out for the bigger, better card with more RAM. It's no different with a terminal.

And just like back in the day, when you logged into CompuServe, AOL, or Prodigy you took advantage of server-side applications as a client, or when you played a MUD on a BBS.

Now you can enjoy browsing the web on your old machine and see pictures too :)


I started to test with ELINKS 0.16 also; 0.17 gave me problems, but it's a big update from 0.13 and 0.14.
Development is still ongoing on GitHub.
 
Sure, the ESP32 could emulate the whole thing and run a TCP/IP stack while computing Pi in its idle cycles... that's not why I am confused.

But I see no reason for the old machine in your description, not even a superficial one. If I understand your setup correctly, you have reduced the function of the old machine to being a passive wire. Unplug both serial devices, connect them to each other with a passive adapter (Rx, Tx, GND and maybe a few handshaking lines), completely bypassing the S100, no loss of functionality?
 
Why use a computer to write a book, when you have pen and paper? No loss of functionality.

Take it or leave it, you have the option.
 
Why use a computer to write a book, when you have pen and paper? No loss of functionality.
Exactly. Why use a computer to write a book with pen and paper?

But thanks for confirming that I did not misunderstand your project.
It is actually interesting, just not for an S100 system (in my opinion).
 