Talking about .ZIP file support in SofaRun, below is a little video. It shows the browsing of big "TOSEC" zipped-ROM files (including one over 200MB in size and containing 600+ files) and the extraction of individual DSK/ROM images (also a complete game directory, the FRS-patched version of Psycho World).
Some comments:
- External applications in SofaRun (SUZ.COM and OPFSXD.COM here) are now launched in a much faster way.
- This has been recorded on a turboR, so browsing and unzipping is super-fast, but it works quite well on a Z80 MSX too.
- The final design in SofaRun will not be exactly the same. You'll be able to set options and launch games directly from the .ZIP file (using only the long file name).
Sofa spotted in screen reflection!
Looks to be working really nicely! What’s different in the way you run external applications?
Sofa spotted in screen reflection!
Noticed that too after posting the video, might try to convert a picture of this one to screen 2 for the next release
I'm now launching the .com files by first saving SofaRun's RAM to allocated DOS 2 segments, then loading the .com file manually and calling 100h (to keep it simple; there are some other subtleties to handle all .com files and the way they exit to DOS). I then restore SofaRun's RAM and return the error code. This allows calling a .com file like a normal routine.
I was previously creating a .bat file, feeding the keyboard buffer with the name of the .bat file plus Enter, and then exiting SofaRun, which was way slower (especially the creation of the .bat file on crowded disks/SD cards).
Ah… I wonder how involved that is.
Using _FORK and _JOIN too? Preparing FCBs for the first two parameters? And setting PARAMETERS environment variable?
It shows the browsing of big "TOSEC" zipped-ROM files (including one over 200MB in size and containing 600+ files)
Very impressive how fast you are loading the 600+ directory entries! Is the central directory loaded as one complete block, and do you have any limits here, or are you doing additional loadings while scrolling through the list?
Ah… I wonder how involved that is.
Using _FORK and _JOIN too? Preparing FCBs for the first two parameters? And setting PARAMETERS environment variable?
Exactly! I've been a bit lazy and I'm not calling _FORK and _JOIN yet (SofaRun has no open files, so I'm still wondering if that's required?).
Other things to take care of:
- Be sure to disable the DOS error & abort handlers you have defined (an error in the called program would jump to an invalid address). Also, the DOS 1 "exit" function first calls the abort handler you have defined before exiting (jp 0).
- Of course, you have to change the "jump" address located at 0, because some programs also exit with a "jp 0". Be sure to do things cleanly here and locate the called piece of code under the stack. My first tests were done with my "return routine" located at 0F975h, so I had a JP 0F975h at 0000h. The problem is that some programs (including the excellent VEDIT.COM) read what's at 0001h to determine the "stack top" location (thinking they can allocate up to 0F975h in this case!).
It shows the browsing of big "TOSEC" zipped-ROM files (including one over 200MB in size and containing 600+ files)
Very impressive how fast you are loading the 600+ directory entries! Is the central directory loaded as one complete block, and do you have any limits here, or are you doing additional loadings while scrolling through the list?
I'm using this approach for SofaRun:
- I first read all file offsets in the .ZIP file by parsing the central directory. I'm not reading the filenames or anything else, just the offsets. They are stored in an array of 32-bit values.
- When displaying a page of filenames, I retrieve the file names & sizes from the ZIP file using the stored offsets. That's fast because there are only 20 entries to display.
Reading all the file names in one pass first was not an option (too slow, and they would not fit in RAM).
Another benefit of the offsets table is that advancing or going back in the file list can be done with just an addition or subtraction (which can't be done easily on the real central directory, because its entries have a variable size).
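For readers unfamiliar with the ZIP layout, here is a minimal Python sketch of that two-pass idea: pass 1 walks the central directory and keeps only one offset per entry, pass 2 fetches names and sizes just for the page on screen. This is an illustration of the technique, not SofaRun's actual Z80 code, and it ignores ZIP64 archives and comment edge cases.

```python
import struct

def find_central_dir(f):
    # Scan back from EOF for the End Of Central Directory signature (no ZIP64).
    f.seek(0, 2)
    size = f.tell()
    chunk = min(size, 65557)          # EOCD is 22 bytes + up to a 64K comment
    f.seek(size - chunk)
    data = f.read(chunk)
    i = data.rfind(b'PK\x05\x06')
    if i < 0:
        raise ValueError('EOCD not found')
    count = struct.unpack_from('<H', data, i + 10)[0]      # total entry count
    cd_offset = struct.unpack_from('<I', data, i + 16)[0]  # central dir offset
    return count, cd_offset

def collect_offsets(f):
    # Pass 1: store only the offset of each central-directory entry (4 bytes each).
    count, cd_offset = find_central_dir(f)
    offsets = []
    pos = cd_offset
    for _ in range(count):
        offsets.append(pos)
        f.seek(pos + 28)              # filename/extra/comment length fields
        name_len, extra_len, comment_len = struct.unpack('<HHH', f.read(6))
        pos += 46 + name_len + extra_len + comment_len     # 46 = fixed header
    return offsets

def read_page(f, offsets, first, page_size=20):
    # Pass 2: fetch names and sizes only for the entries shown on screen.
    page = []
    for pos in offsets[first:first + page_size]:
        f.seek(pos + 24)
        uncomp_size = struct.unpack('<I', f.read(4))[0]
        name_len = struct.unpack('<H', f.read(2))[0]
        f.seek(pos + 46)
        name = f.read(name_len).decode('cp437')
        page.append((name, uncomp_size))
    return page
```

Because `offsets` holds fixed-size 32-bit values, moving a page up or down is indeed just index arithmetic on the array, exactly as described above.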
That sounds like the optimal approach!
Btw, did you solve the problem with long directory names and the correct destination for their files?
Example:
The Central Directory contains these entries:
this is a long directory/
this is a long directory/file1
this is a long directory3/
this is a long directory/file2
this is a long directory/file3
this is a long directory3/fileA
this is a long directory3/fileB
As the directories have to be shortened to
this_i~1/
this_i~2/
how do you know where to put file1,2,3,A,B?
Do you store a mapping of all long and short filenames?
(sorry for all the questions!)
I do not handle long directory names for now (just long file names); I simply truncate the directory names. As the directories and files may be stored in any order in the .ZIP file, you're right that this would require a mapping.
Even the long file name support is not as sophisticated as when you copy files to an SD card from Windows, for example. There is no place to store these associations, so I'm just incrementing a counter at file creation time. Maybe we could imagine an extra hidden file in each directory keeping the long file name / short file name associations (the equivalent of what is stored in the directories on MS-DOS / Windows).
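The counter-based scheme can be sketched like this in Python: truncate the name to 8.3, and when the result is too long or already taken, burn a `~N` counter into the tail, matching the `this_i~1` / `this_i~2` example above. The exact truncation and character-cleanup rules here are assumptions for illustration, not SofaRun's actual code.

```python
def make_short_name(long_name, used):
    """Derive an 8.3-style short name; disambiguate with a ~N counter.

    `used` is the set of short names already taken in the directory.
    Illustrative only -- SofaRun's real rules may differ.
    """
    base, dot, ext = long_name.rpartition('.')
    if not dot:
        base, ext = long_name, ''
    # Uppercase, map spaces to '_', and keep only FAT-friendly characters.
    clean = lambda s: ''.join(c for c in s.upper().replace(' ', '_')
                              if c.isalnum() or c in '_-')
    base, ext = clean(base), clean(ext)[:3]
    candidate = base[:8] + (('.' + ext) if ext else '')
    if len(base) <= 8 and candidate not in used:
        used.add(candidate)
        return candidate
    # Name too long or colliding: append ~1, ~2, ... until a free slot is found.
    for n in range(1, 10000):
        tail = '~%d' % n
        short = base[:8 - len(tail)] + tail + (('.' + ext) if ext else '')
        if short not in used:
            used.add(short)
            return short
    raise RuntimeError('no free short name')
```

Keeping the `{long_name: short_name}` dictionary produced by calls like this is exactly the mapping that would be needed to route `file1`, `file2`, `fileA`, etc. into the right truncated directory, whatever order the entries appear in.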
Not sure I will make many changes in this direction; it depends on how many complaints I get.