Hash, Inc. - Animation:Master

Recommended Posts

  • Hash Fellow
Posted

New study says AI can boost programmer productivity... but maybe not.

 

Sabine Hossenfelder is a physicist who comments on science issues

 

Posted

It does not, especially not quality-wise... AI increasingly learns from code that was itself created by AI, and that makes it worse and worse, because AI will simply "claim" something is right even when it is wrong.
The more code that is produced by AI or with AI's help, the worse the code gets, and the worse the training base for AI systems becomes.

Best regards
*Fuchur*

Posted

AI is not the big productivity booster the talking heads want you to believe it is. I've not only been using AI to build code, but also developing AI models in my own software. In my experience, AI-generated code (JavaScript and Java) is buggy at least 50% of the time. It's great to spark an idea, but you have to know what you are doing ahead of time in order to fix the code created by the LLM.

  • Admin
Posted

I've found AI code generation useful for exploring ideas.

I'm focusing on Python because that is the primary coding language ChatGPT itself uses, so it seems a good place to start; adding in external libraries is also fairly trivial. Later, if the Python code works well, it can be translated into other programming languages.

So the basic idea is to construct 'demos' of a program that does specific processes.

Later that code can be referred back to, plussed up, and iterated on.

 

For example, a few recent programs I used AI to build include:

- A screen recorder that outputs to PNG sequence.  

The idea here was to leverage the program Opentoonz to edit and otherwise manipulate the PNG sequence

The output could be some other format but I wanted to explore that aspect.

- A utility to extract embedded materials out of Animation:Master project files.

Initially the program was to extract models out of project files but I thought materials might be easier to troubleshoot/refine.

Also added the ability to simply copy .mat files from an entire drive into a directory.

Also added an option to move duplicate materials into a subdirectory (uses file size and then the content of the file to determine duplicates).
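A minimal sketch of how that size-then-content check might work (the function name and structure here are my own, not the actual utility): group files by size first, since differing sizes can never be duplicates, then hash only the candidates.

```python
import hashlib
from collections import defaultdict
from pathlib import Path

def find_duplicates(paths):
    """Group files by size first (cheap), then confirm with a content hash."""
    by_size = defaultdict(list)
    for p in map(Path, paths):
        by_size[p.stat().st_size].append(p)
    dupes = defaultdict(list)
    for same_size in by_size.values():
        if len(same_size) < 2:
            continue  # a file with a unique size cannot be a duplicate
        for p in same_size:
            digest = hashlib.sha256(p.read_bytes()).hexdigest()
            dupes[digest].append(p)
    # keep only digests shared by two or more files
    return {d: ps for d, ps in dupes.items() if len(ps) > 1}
```

The size pre-filter is what makes this practical across an entire drive: hashing happens only for files that already collide on size.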

 

- I've created a number of simple drawing programs to test various ideas.

Played with creating a settings file (.ini) and loading brushes (unique brush formats, but also MyPaint brushes... I used only the size and dab distance, though, as the fancier brush attributes would be real work and this was just the initial demo).
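The "size and dab distance" idea boils down to stamping brush dabs along the stroke at a fixed spacing. A sketch of that core loop (function name mine; real brush engines do much more):

```python
import math

def dab_positions(points, spacing):
    """Walk a polyline stroke and emit dab centers every `spacing` pixels,
    the minimal behavior kept from a brush format (size + dab distance)."""
    dabs = [points[0]]
    carried = 0.0  # distance accumulated since the last dab
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        seg = math.hypot(x1 - x0, y1 - y0)
        t = spacing - carried  # distance along this segment to the next dab
        while t <= seg and seg > 0:
            f = t / seg
            dabs.append((x0 + f * (x1 - x0), y0 + f * (y1 - y0)))
            t += spacing
        carried = (carried + seg) % spacing
    return dabs
```

Each dab would then be drawn as a circle of the brush size; a smaller spacing gives a smoother, more "inky" stroke.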

Here's where playing with AI-driven code gets interesting: as a program gets more complicated, it can be easier to start over from scratch and build a new program than to continually tweak the old one.  ChatGPT has recently gained a 'canvas' that allows editing/addressing specific parts of text/code, so that may help in cases where code gets too long and responses time out.

 

- I've created demos of programs to create PDF files and multipage TIF images (from user selections)

Had fun exploring playback of PDF and TIF with an FPS slider to reach speeds not normally used for those formats... leveraging PDF/TIF, in effect, as animation formats.  A File menu option outputs to GIF animation, which allows those to be quickly saved with different timings.

- Created various demos using OpenCV to identify and extract faces out of images

Leveraged OpenCV to perform quite a few different processes, including cartoonizing images, brightness/contrast adjustment, and inversion... most of these are fairly trivial to incorporate.

- Pin/grid distortion of imagery (move a pin/dot and save out the current image with the distortion applied)

- An auto color fill demo to explore the four color theorem and better understand how automatically coloring a series/sequence of images might be achieved

- FFMPEG / Command Runner - a program that allows a list of commands to be indexed to control running order, activated/inactivated, and, if desired, tweaked and run individually.
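The core of such a command runner is small: keep the commands in a list with an order index and an active flag, then run the active ones in index order. A sketch (the dict structure is my own invention, not the actual program's):

```python
import subprocess

def run_commands(commands):
    """Run a list of {'index', 'active', 'args'} entries in index order,
    skipping inactive ones; returns each command's captured stdout."""
    outputs = []
    for cmd in sorted(commands, key=lambda c: c["index"]):
        if not cmd["active"]:
            continue  # inactivated commands stay in the list but do not run
        result = subprocess.run(cmd["args"], capture_output=True, text=True)
        outputs.append(result.stdout.strip())
    return outputs
```

An ffmpeg entry would just be `{"index": 1, "active": True, "args": ["ffmpeg", "-i", "in.mp4", "out.gif"]}`; reordering is a matter of editing the index.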

- Local ChatGPT interface program that logs prompts and responses (uses ChatGPT API which isn't free)

Many other tests and explorations, such as extracting preview images/icons out of Animation:Master files, burning SRT (subtitle) scripts into image sequences... creating the frame-based timing from those subtitles, a grid creator... other things I've forgotten.
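The frame-timing part of the SRT idea is a simple conversion: an SRT timestamp names a wall-clock time, and at a given frame rate that maps to a frame index. A sketch (function name mine):

```python
import re

def srt_time_to_frame(timestamp, fps):
    """Convert an SRT timestamp like '00:01:02,500' to a frame index at fps."""
    h, m, s, ms = map(int, re.match(r"(\d+):(\d+):(\d+),(\d+)", timestamp).groups())
    seconds = h * 3600 + m * 60 + s + ms / 1000.0
    return round(seconds * fps)
```

With start and end both converted, "burning" a subtitle just means drawing its text on every frame in that index range.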

I recently created a VGCI-to-SVG converter to aid in saving vector images out of the (beta) VGC Illustration program, and had the author of that program (Boris) make suggestions for improving the work-in-progress code that converts from SVG back to VGCI.  Boris knows the VGCI format because he's the author of that format... insights AI wouldn't have.  While the VGC program will surely gain that SVG output some day, it's been nice not to have to wait for it to be added.

I have done some C++ and other things with AI, but for the most part I've been using it to explore ideas (using Python, with tkinter for the GUI).

So, my experience... that of a non-programmer... the main drawback to AI-generated code, in my experience, is my own lack of knowledge and of a good understanding of which prompts get the best output.

It can help to think of AI as an assistant that won't get everything right, and not to expect it to do the entire job for us.

So, for what it's worth, count me as a big fan of AI-generated programming.  :)

 

 

 

  • Admin
Posted

Not sure this is the best example, but here's a quick, no-frills vector drawing program:

 

image.png

Initially, I just had ChatGPT create a vector drawing program with SVG compatibility as that would be the primary format images are saved in.

The result was an overly simple drawing program that only produced straight lines.

So... end of first iteration.

New prompt was to allow the user to draw as if drawing in a raster drawing program.

Much better.

Next iteration:  Let's add Undo/Redo

Done.

Let's make sure we have shortcut keys for Undo/Redo and Save

Done.

And that's where this quick demo is at this moment.

No code written by me as of yet... just prompting.

 

  • Admin
Posted

Next, we add a File menu option to load SVGs.

 

At this point I find it very useful to add a signature option that I tend to put in all my Python programs that have GUIs: a restart/reload option.

This saves a lot of time when relaunching the program with updated Python code, so we don't have to constantly go back to the command line.

We'll have the program make note of what image was last saved so that it will restart with that image already loaded.
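The usual Python trick for a restart option is `os.execv`, which replaces the running process with a fresh interpreter on the same script. A sketch; the `--load` flag for reopening the last-saved image is hypothetical, not the actual program's option:

```python
import os
import sys

def restart_argv(last_image=None):
    """Build the argv used to relaunch this script. `--load` is a
    hypothetical flag the program would parse to reopen the last image."""
    argv = [sys.executable, os.path.abspath(sys.argv[0])]
    if last_image:
        argv += ["--load", last_image]
    return argv

def restart(last_image=None):
    # Replace the current process with a fresh interpreter; nothing
    # after this call runs in the old process.
    argv = restart_argv(last_image)
    os.execv(argv[0], argv)
```

Wired to a menu item, this gives the edit-save-reload loop without ever touching the command line.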

 

Next, let's add a basic ability to change the brush/vector line color (we'll refine that capability and make it more user friendly later).

image.png

 

Note that here (at the stage of adding color brush option) is where I got my first 'bug' and the program wouldn't run.

I just copy/paste the error from command line into the prompt and the problem is fixed on first try.

Next up... we'll add a brush size slider so the user can get some variation in line thickness.

  • Admin
Posted

In adding a few more options I realize we really need a File > New feature so that we can start from a blank canvas.

Other ideas tend to be obvious and present themselves easily; a recommendation would be to keep a piece of paper handy and jot down thoughts as they come to you.  For instance, a recently-saved-files listing would be nice so that those images can quickly be reloaded.

Color Fill is certainly an obvious feature request but also one that might be worth considering carefully depending on how we want to advance the program.  For instance, will this program be for illustrations?  Animation?  Our approach to color fill might be very different.

An Eraser!  Gotta have that.

(actually ChatGPT presented that as an option for possible improvement of the program)

But... line thickness slider: implemented.  Although we needed to refine it a little to get smooth lines rather than 'bristles' (basically centerline versus outline methodologies).  We'll have to account for the end caps as well.

So yeah.  Brush size slider... rounded endcaps.  

Done.

image.png

  • Admin
Posted

Now, here's where we start to get 'user friendly'.

We'll add a Help menu with a link to online documentation.

We'll even add another Help menu option so the user can edit that URL and provide their own link.

(This would likely be frowned upon by most program authors, as you would normally want the link to go to a known/dedicated URL.  However, since the Python code itself is easily editable... we'll just make it even easier to edit.)

The editing of this URL might better be placed into a Preferences dialogue but that's for future consideration.

And what?  Pressure sensitivity for our drawing?  Those feature requests are really flowing in!

 

image.png

  • Admin
Posted

I almost outsmarted myself on this latest addition: an option to update the code from online.

I've always wanted to 'code' that feature.

 

The difficulty here is not only making sure there is a date to check in the Python script, but also (in testing) making sure the local version has an older date than the online version so the successful update from GitHub can be confirmed.  Also, making sure that if local is newer, no update happens, etc.

Needs some more testing as there is a lot to account for but I think we've got it.
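The comparison logic itself can be kept tiny if both copies of the script carry a date marker line. A sketch, assuming a `# version-date: YYYY-MM-DD` convention (my invention for illustration; fetching the remote copy would just be `urllib.request`):

```python
import re

def should_update(local_source, remote_source):
    """Compare '# version-date: YYYY-MM-DD' marker lines; update only when
    the remote copy is strictly newer (never downgrade a newer local file)."""
    pattern = r"# version-date: (\d{4}-\d{2}-\d{2})"
    local = re.search(pattern, local_source)
    remote = re.search(pattern, remote_source)
    if not (local and remote):
        return False  # refuse to update when either date is missing
    # ISO dates compare correctly as plain strings
    return remote.group(1) > local.group(1)
```

Testing then means faking the two sources with older/newer/equal/missing dates, which covers the cases described above without touching GitHub.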

image.png

 

image.png

Also, added a Help > About where the user can verify the version/date of the current program/script they are using.

Posted

Well Rodney, color me humbled. Great use of ChatGPT! Your use of iterative prompts, I think, is the key to your success. Makes me want to redouble my efforts. The apps I'm working on are DB designers and data quality workshops. The boring stuff... but this app is going to be licensed at $145k per year to start, so sometimes boring is worth it.

  • Hash Fellow
Posted

It seems to me that the advantage Rodney has in his project is that he already knows what a drawing program should do. He can instantly spot the shortfalls of the AI program and ask for a correction.

What if someone who didn't know what drawing program conventions are asked for a drawing program? How do they know the first version isn't the proper thing?

What if it's a goal where the correct result isn't immediately obvious when the program runs, like diagnosing a disease or aiming a space probe?

  • Admin
Posted

For safety's sake I have not committed myself to creating things like remote surgery software.

I'm pretty sure I should leave that for others.

Although... not knowing anything about the subject might actually be an advantage!

 

Precision is an ongoing concern for all things generative.

But that's really no surprise as that is pretty important even without AI.

 

 

  • Admin
Posted

Various (and partial) additions of Pen Pressure, Snapping to Stroke, Snapping to Grid (with adjustable grid) etc.

image.png

 

That this is vector strokes is quite interesting to me as it looks a lot more like raster.

Definitely need to work on the pressure sensitivity and this is where actually looking at (and understanding!) the code helps.

That is also where prompting can get more specific.

At times what I've done is manually edit the code and try things out and then share that code back with ChatGPT so that it knows what 'version' of a program we want to continue using.

 

Posted

Rodney you are building a marketable app there. Make it a saas and prepare to retire a gazillionaire.

 

Robert, using gen AI to code presupposes prior knowledge of the problem and solution you are trying to solve. Let me tell you a story of what I ran into last fall. I interviewed for a position at a startup last November. It was a company started by an ad executive who had made his millions and got bored. He decided he wanted to jump into the "season ai" business. This guy had zero clue what that meant. So he wanted to hire someone who could put together an app based on his ideas. I get in the interview call... and one question I was asked was: "I want to build an app for kids' sports leagues that will use AI to predict the weather and cancel events if necessary. How would I do it?" Their premise being that AI is magical and cheap. My answer was: why use AI? You already have the National Weather Service, which has that info available via APIs... no money spent. To build an AI app would require months just to accumulate the data to train a model, then constant monitoring and retraining. That approach is just plain silly. I didn't get the job.

 

Point being, even though the media wants you to believe that AI is magical and ready to take over the world, that's delusional. And if you think you can just jump in and build something without knowing what you're doing, well, that's silly too. Rodney is what investors call a unicorn... he's making it happen. But I would wager he is doing so because he already knows how to build an app.

  • Admin
Posted

I do... (know how to build an app) and I don't.

I've been hovering around development since I first started using computers and little bits and pieces of obscure information would capture my fancy.

Running into something I'd never known before (but felt I surely should have) has always been fascinating to me.

Examples include the PPM/PGM image format... a text-based image format that has been around for ages; @robcat2075 happened to mention it as being part of his raytracing studies.  I might have heard or read of it elsewhere, but one day I ran across a practical use: I think ChatGPT was only working with text-based files/data at the time.  How do you create and manipulate images when they must be text-based?  I knew of Base64, but that didn't seem practical, and SVG wasn't quite what I was after... so what is this PPM image format?  Cool.  That worked.  And there are several thousand more crazy ideas where that came from...
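The plain-text (P3) PPM variant really is just a header plus numbers, which is why it works so well in a text-only workflow. A minimal writer:

```python
def write_ppm(path, pixels):
    """Write a plain-text (P3) PPM: magic number, width/height, max value,
    then one 'R G B' triple per pixel, row by row. `pixels` is a list of
    rows of (r, g, b) tuples in 0..255."""
    height, width = len(pixels), len(pixels[0])
    with open(path, "w") as f:
        f.write(f"P3\n{width} {height}\n255\n")
        for row in pixels:
            f.write(" ".join(f"{r} {g} {b}" for r, g, b in row) + "\n")
```

Because the whole file is readable text, an LLM (or a human) can generate or patch an image with nothing but string handling.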

So yeah, I might have the advantage of some 30 years of endless curiosity about drawing and computers, although without the skills and patience to do anything developmentally in programming, but I find it is a bit more than that.  Processes like ChatGPT certainly depend on prior knowledge, but that is what makes them useful and directable.  I've long been around programming but never been a programmer.  I'd say in most ways I still technically am not.

I can work/read through some C++, python and such but more often than not the logic of the code can easily elude me.

For me, that's where AI can help.

 

Perhaps the one thing that will stop an effort in its tracks with AI generated anything is where the scope is too broad.

ChatGPT will easily time out if a program is too complex (as in too many lines of code).

Learning workarounds to problems then is an ongoing concern with generative AI. 

 

Rodney... the unicorn... I really need to do something with that concept although I don't think I need AI to do that!  ;)

Posted

Rodney, having been a software developer for close to 40 years, I have to say that even though you don't see yourself as one, the very description you gave is exactly that of the most talented developers out there. The ability to problem-solve, to see a problem from different angles, and to use a level of imagination to solve a problem in code are skills that most of today's programmers simply don't have. I'd hire someone with your skills over a "professional developer" any day of the week.

  • Admin
Posted

Here's a quick try at creating 3D...

To keep things simple I just asked for 4 views:  Front, Top, Right and 3/4 view.

There is a single spline consisting of 3 control points which the user can adjust.

The views update as the control points are adjusted.

 

image.png

 

While there is much to be desired in this I was surprised it produced as successful a program as it did.

The primary failure at this point is that grabbing and manipulating the control points is too sensitive.  In my estimation, the target area of the control points needs to be larger, and when moved, the distance moved should be more incremental (perhaps even controlled by grid units?)

This was just a quick shot across the bow to see what AI would produce.

I don't have any specific goal in mind, which at this stage might be a good thing, as I don't have many preconceived ideas.

I just want to see how well we can move that 3 point spline.

An interesting goal might be to get that center control point to always be on the spline (Hermite spline?).

Might need to research some of Martin Hash's old papers.   
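For reference, a cubic Hermite segment is easy to evaluate directly, and "keeping the center control point on the spline" could amount to snapping it to the curve at t = 0.5. A sketch with 2D points as tuples (my own helper, not code from the program):

```python
def hermite(p0, p1, m0, m1, t):
    """Evaluate a cubic Hermite segment at t in [0, 1] from endpoints
    p0, p1 and tangents m0, m1 (points given as (x, y) tuples)."""
    # the four Hermite basis functions
    h00 = 2 * t**3 - 3 * t**2 + 1
    h10 = t**3 - 2 * t**2 + t
    h01 = -2 * t**3 + 3 * t**2
    h11 = t**3 - t**2
    return tuple(h00 * a + h10 * c + h01 * b + h11 * d
                 for a, b, c, d in zip(p0, p1, m0, m1))
```

At t = 0 and t = 1 this reproduces the endpoints exactly, so a midpoint handle computed this way always lies on the curve.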

 

Note that ChatGPT did suggest Three.js, which is probably best for 3D web graphics, but I'm still firmly in Python-land...

 

Added:  It kind of goes without saying that code for 3D programs can get complex really quickly!

  • Admin
Posted

I returned to this 3D spline program for a moment as I wanted to implement something I never have tried before.

That is to have a text editor/viewer open adjacent to a working area so that either the working area can be adjusted or the text.

Little things like this could be useful especially where programs might highlight objects in the file that are currently selected in the viewer(s).

 

If considering Animation:Master files something like this might be useful for swapping out materials, models, actions on the fly where the text of the resource is in view in the editor and the result in the main workspace.

Ideally there would be very robust error checking to keep edits from breaking valid formatting.

But, yeah, I just wanted to see how that might work given side by side interactivity.

 

image.png

And no, the data is not accurate here.  I was in the process of adding cubic Hermite spline processing (rather than Bézier) and a world space to unify the viewers, so they would be based on the control points' locations rather than control points based on viewer location, etc., etc., etc.

 

I suppose the exercise served its function: to explore the general idea and learn more about what not to do next time.  ;)

Posted

Rodney, how are you phrasing your prompts? Something like this is my next step in my app: I need to build, in JavaScript, a canvas that displays an ETL workflow. For instance, an object representing a source system, then one representing a table, then the target. In between are lines connecting each object. Click on the table and it expands to columns... click on a column and see cleansing rules, etc. I'm curious how verbose you make your prompts to achieve your results.

 

  • robcat2075 changed the title to AI not helping programmers [or maybe it is?]
  • Admin
Posted
Quote

 

how are you phrasing your prompts?


 

I tend to keep prompts very simple although if I know something specific I want to use I will use that term.

For instance, I tend to use python a lot because I noticed ChatGPT uses python itself.

As such it might be good to use python initially to work out basic code and processes and then prompt to translate from python to the programming language of choice... in your case javascript.

 

Quote

Something like this is what my next step is in my app. I need to build in javascript a canvas that displays an etl work flow. For instance an object representing a source system then one representing a table then to target. In between are lines connecting each object. Click on the table and it expands on columns...clicks to column and see cleansing rules etc.


What I get out of that paragraph is:  "Let's build a javascript program that displays an ETL workflow.  This workflow should be displayed on a canvas.  Two primary objects are required to represent the source system and a table to target.  Each object should be connected with workflow lines.  The user can click on the table to expand columns and click on column headers to see 'cleaning rules'."

There is a lot of terminology I don't specifically know in that description but the important thing is that ChatGPT must know.

We might initially be more generic and just see what ChatGPT puts together without specific jargon, then supply those specifics later.

We might start with a simple table (3x3) and once successful expand to larger tables.

 

Quote

I'm curious on how verbose you make your prompts to achieve your result's

I tend to be very terse and concise in my prompting, in order to better understand what changes from one prompt to the next.

If for instance I don't even have a table but other elements of the results are working I might prompt:  "Let's add a 3x3 table the user can enter text into."

The idea being to work from success to success and to better be able to take a step back if something isn't working.

 

ChatGPT will often make suggestions and in many cases those suggestions are useful to grab hold of and expand upon.

It might suggest using a specific library or algorithm... or we might ask what some best practices in <insert given area of interest here> are.

 

I tend to use 'Please... ' as in "Please create a C++ program that..." just to keep things light and to hedge against that fateful day when AI takes over the world.

Maybe they'll go easy on me.  ;)   (Just kidding.... I think!)

 

When the inevitable errors occur I usually copy and paste the error.  Often just stating "Error:  <copied and pasted error message here>"  ChatGPT then attempts to address the problem with varying levels of success.  Sometimes it gets it right the first time.  Sometimes we get a different error and iterate through that process again.  I'll mix this up with fuller sentences like "That change gives us an error that <insert pertinent info about error here>".

 

Any text-based information is going to tend to be more useful than other forms of information, although certain models will accept and interpret images.

This can save time with error dialogs that don't allow copying the actual error text: a screenshot of the dialog works instead.

 

Adding debug output to the console can help, although ChatGPT will often add that into code automatically, which is useful for programs run from the command line.  In other languages we might add specific error output to a display or to a text file/log.

I confess that I haven't created much in the way of javascript but I see no reason why that shouldn't produce useful results.

There may be cases where we want to create a program (in python etc.) which in turn outputs the desired code.

This would be particularly useful when dealing with formatted files in XML, JSON etc. as the program can then keep track of the file formatting and lighten the load on actual processing.

I must assume that any significant table/workflow presentation might use MySQL or something like that to store data.

Where the data is contained is important to place in the prompt at some point as it has to be stored somewhere.

 

 

 

 

 

  • Admin
Posted

Here's the no frills result of one html file and one javascript file based on my interpretation of your prompt:

image.png

The actual prompt:  ```Let's build a javascript program that displays an ETL workflow. This workflow should be displayed on a canvas. Two primary objects are required to represent the source system and a table to target. Each object should be connected with workflow lines. The user can click on the table to expand columns and click on column headers to see "cleaning rules".```

 

I've added the generated files here.

index.zip

  • Admin
Posted

A second prompt didn't produce anything worth mentioning, but a follow-up to that produced a single HTML file containing both the HTML and the JavaScript.

This followed a suggestion from ChatGPT to expand to multiple sources and targets.

The result:

 

 

update.png

  • Admin
Posted

It is trivial to adjust the presentation of html/css graphics.

The following is from a very generic prompt:  ```Let's round the corners of the nodes to make the design/presentation more modern.```

I really do like the single file output although as the responses get more complex longer documents will tend to time out.

As such it might be good to channel specific functions into different files (CSS, JS, HTML).

Then we might prompt for updates to only one of those files.

 

 

 

modernroundedcorners.png

  • Admin
Posted

This might be a good time to suggest that AI generated 'stuff' isn't just useful...

It is often fun.  :)

 

Here I've adjusted our target to create an interactive Employee tree.

Where the fun begins is with the idea of... spritesheets.

I thought... why not?

And... why not create an HTML file that actually generates the spritesheet?  (I was too lazy to manually create a spritesheet to use in the demo!)

Then all we'd have to do is overlay/place an appropriate photo of our esteemed employees at the right location.

I didn't use my AI generated screen capture program for this but... here's a video demo.

I display the HTML spritesheet generator at the end.

 

All of this taking a few minutes to create.

 

  • Admin
Posted

Added a link via the text below the profile image that takes us to an employee page:

image.png

For those curious this isn't actually an exploration into Employee trees but rather the idea of projects and resources, download pages, etc.

It's disguised as an Employee tree because... why not?

Now, if I could only extract the preview icons out of A:M files and automagically place them into the spritesheets.  ;)

Posted
6 hours ago, Rodney said:

I tend to use 'Please... ' as in "Please create a C++ program that..." just to keep things light and to hedge against that fateful day when AI takes over the world.

Maybe they'll go easy on me.  ;)   (Just kidding....

I have to laugh at myself for doing the same thing Rodney.  Like being polite to the chat bot will yield better results.

Posted
7 hours ago, Rodney said:

I must assume that any significant table/workflow presentation might use MySQL or something like that to store data.

Postgres, on AWS. We use back-end APIs to post and get data from the DB. All of the data, including screen coordinates, is housed in JSON and passed to the API.

 

Thanks for your input rodney..I have a lot to chew on this week in between investor meetings. And sorry for hijacking your thread!

Posted
6 hours ago, Rodney said:

Added a link via the text below the profile image that takes us to an employee page:

image.png

For those curious this isn't actually an exploration into Employee trees but rather the idea of projects and resources, download pages, etc.

It's disguised as an Employee tree because... why not?

Now, if I could only extract the preview icons out of A:M files and automagically place them into the spritesheets.  ;)

What we need to do now is train chatgpt on am file structures and use it to generate character walk cycles!

  • Admin
Posted
Quote

What we need to do now is train chatgpt on am file structures and use it to generate character walk cycles!

I did create a GPT app called 'Animation Monster', uploading the Tech Ref, the TaoA:M manual, and various files (an empty project, material, and model files, if I recall correctly).  This was primarily a test to see what GPTs were capable of, and I had some success, but GPTs don't have the same history-saving feature as ChatGPT itself, so I'd be hard pressed to recall the specific tests I ran.

 

That GPT can be accessed here:  https://chatgpt.com/g/g-ezATgMUbQ-animation-monster

I just now have the URL for animationmonster.org as well to chronicle anything of interest:  

I have often thought that refining processes such as applying BVH motions to a specific A:M rig would be useful and make that process easier.

  • Admin
Posted

Today's demo program is a magic erase tool.

image.png

I call this Magic Erase Background Removal but it need not be background. 

The user simply selects an area and flood fill identifies the area for removal.

 

Process

This took a little over 15 minutes to create because ChatGPT simply refused to implement the code to remove pixels correctly.

The reason the display above has red is a clue as to how I overcame the problem.

Instead of the approach ChatGPT was trying to use, I told it to try a different one: first fill the user-selected area with red (255, 0, 0), then use that red area to create the alpha channel's transparency.

An unexpected success: I had planned to direct ChatGPT to add the ability to select multiple areas for removal, but the generated code already allowed that, so I didn't even have to ask.
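A pure-Python sketch of that two-pass idea (the real program presumably works on PIL images; here the image is just a grid of RGBA tuples, and the function names are mine): flood fill marks the clicked region red, then a second pass turns every red pixel fully transparent. Multi-area selection falls out for free, since each click just adds more red before the final pass.

```python
RED = (255, 0, 0, 255)  # the marker color used between the two passes

def flood_mark_red(pixels, x, y):
    """Flood fill the 4-connected region around (x, y) with opaque red.
    `pixels` is a mutable grid of RGBA tuples, indexed pixels[row][col]."""
    target = pixels[y][x]
    if target == RED:
        return
    stack = [(x, y)]
    while stack:
        cx, cy = stack.pop()
        if 0 <= cy < len(pixels) and 0 <= cx < len(pixels[0]) and pixels[cy][cx] == target:
            pixels[cy][cx] = RED
            stack += [(cx + 1, cy), (cx - 1, cy), (cx, cy + 1), (cx, cy - 1)]

def red_to_alpha(pixels):
    """Second pass: every red marker pixel becomes fully transparent."""
    return [[(0, 0, 0, 0) if px == RED else px for px in row] for row in pixels]
```

The visible red in the screenshot above corresponds to the state between the two passes.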

 

To do?:

- [ ] Add a pixel paint brush so the user can manually paint/mask areas.

- [ ] Add an Erase brush to accomplish some of the above, but with the anticipated user experience

- [ ] Add Erode and Dilate options to chip away at pixels or add outlines.

- [ ] Add Invert option (various options to output color separated RGBA channels)

- [ ] Allow for full color processing.  (I currently have not tested full-color images to determine shortfalls.)

- [ ] Add option to display a checkerboard pattern rather than red in the UI.  This should be trivial to implement but we do want to make sure the code uses the red rather than the checkerboard to create the alpha channel.

- [ ] Add standard file menu items such as reloading the program (as that is so important to me for iterating improvements to the program)

- [ ] Add a "debugging" option that saves a PNG image at each step of the process in sequential image format.  I've found this to be a useful feature when wanting to play back animated steps of the process and depending on what is being done by the program can produce some useful content.

- [ ] Add filtering processes to detect if an image is pure black and white and if not convert it for the purpose of this program.

- [ ] Add an option to process entire directories, although note that the locations of maskings will change from image to image, so we need special processes to allow for this.

 

 

acuterobotoutput.png

For anyone who wants to follow the progress of this demo, make suggestions, or improve the program, the code and todo list are on GitHub:  https://github.com/OpenAnimationLibrary/extrastuff/tree/master/tools and utilities/magicerase

  • 2 weeks later...
  • Admin
Posted

Returning to this to post an example of something I had wanted to create a long time ago but only just now created with the help of ChatGPT.

A comic book cover organizational web page.

It's not quite the concept I originally had in mind as that would require an actual website and this is local html (and javascript) but it does the basics of that same process.

 

image.png

What we see here is three lists of comic book covers (these could be comics in a want list, comics in the current collection, and comics ready to be sold/traded).

The cover entries in the list can easily be drag/dropped into a different column/list.

Referenced cover images can be sourced from local files or URLs (although note that if someone moves the image or a site becomes unavailable, that image link breaks).  The current content of this three-list collection can be saved out in XML or JSON format and reopened, so multiple collections are easy to manage.

The image data is stored in base64 format in the XML, so once saved, the images can easily be recovered and used in other applications.
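Round-tripping image bytes through base64 inside XML is a few lines with the standard library. A sketch (element and function names are my own, not the demo's actual schema):

```python
import base64
import xml.etree.ElementTree as ET

def covers_to_xml(covers):
    """Embed raw image bytes as base64 text in one <cover> element per entry.
    `covers` maps a name to the image's raw bytes."""
    root = ET.Element("collection")
    for name, data in covers.items():
        node = ET.SubElement(root, "cover", {"name": name})
        node.text = base64.b64encode(data).decode("ascii")
    return ET.tostring(root, encoding="unicode")

def covers_from_xml(xml_text):
    """Recover the original image bytes from the saved XML."""
    root = ET.fromstring(xml_text)
    return {n.get("name"): base64.b64decode(n.text) for n in root.findall("cover")}
```

Because base64 is plain ASCII, the resulting file stays an ordinary text document while still carrying the exact image data.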

Ideally we'd add metadata (title, publisher, condition, pricing, description etc.) but I'd like to keep this simple so need to ponder that aspect. Perhaps clicking on the image displayed at top would take you online to some web link where that data is readily available and where that information is updated and maintained by others.

Regarding the idea of trades/sales, I'd like to pursue that with a focus not on profiteering but rather on matching people with resources in a way where participants are rewarded for keeping costs as low as possible.  Profiteering just drives the price and unavailability of comics upward, when the goal should be the opposite.  For publishers, the idea might be to allow direct access via a 'read me' link so those interested can actually read the comic.

At any rate, a fun exploration into the possibilities and I learned a few things about how to manipulate data and store it both in real time and via export/import.

The look and feel could certainly use an update but that's covered via the separate style.css file which could be modified with various themes (think dark mode) rounded image corners, scaling of the cover images while hovering, etc. 

 

For the curious, I have all of the comics in this screen capture except one.

  • 4 weeks later...
