Hash, Inc. - Animation:Master

Everything posted by Rodney

  1. The updated Watchfolder script:

```python
import os
import time
import shutil
import keyboard
import subprocess
import re
from datetime import datetime
import configparser
import sys

# === DEFAULT CONFIG ===
DEFAULTS = {
    "watch_dir": "F:/renderfolder",
    "ffmpeg_path": "ffmpeg",
    "framerate": "24",
    "timeout": "5",
    "video_basename": "video",
    "max_runtime_minutes": "0",
    "reset_timeout_on_video": "false",
    # GIF options
    "make_gif": "true",
    "gif_max_seconds": "6",  # 0 = full video
    "gif_width": "320",
    "gif_fps": "12",
    "gif_suffix": "_thumb"
}

def get_exe_dir():
    if getattr(sys, 'frozen', False):
        return os.path.dirname(sys.executable)
    else:
        return os.path.dirname(os.path.abspath(__file__))

INI_FILE = os.path.join(get_exe_dir(), "watchfolder.ini")

# ----------------- Config -----------------
def load_config():
    config = configparser.ConfigParser(inline_comment_prefixes=("#", ";"))
    if not os.path.exists(INI_FILE):
        # write a friendly ini with comment lines (not inline)
        config["settings"] = DEFAULTS
        with open(INI_FILE, "w") as f:
            f.write(
                "[settings]\n"
                "watch_dir = F:/renderfolder\n"
                "ffmpeg_path = ffmpeg\n"
                "framerate = 24\n"
                "timeout = 5\n"
                "video_basename = video\n"
                "max_runtime_minutes = 0\n"
                "reset_timeout_on_video = false\n"
                "# GIF settings\n"
                "make_gif = true\n"
                "gif_max_seconds = 6\n"
                "gif_width = 320\n"
                "gif_fps = 12\n"
                "gif_suffix = _thumb\n"
            )
        print(f"[i] Created default {INI_FILE}")
    else:
        config.read(INI_FILE)
    if "settings" not in config:
        config["settings"] = {}
    for key, val in DEFAULTS.items():
        if key not in config["settings"]:
            config["settings"][key] = val
    return config["settings"]

# ----------------- Helpers -----------------
def get_png_files(watch_dir):
    return sorted([f for f in os.listdir(watch_dir) if f.lower().endswith('.png')])

def get_next_video_filename(watch_dir, basename):
    count = 1
    while True:
        candidate = f"{basename}_{count:04d}.mp4"
        if not os.path.exists(os.path.join(watch_dir, candidate)):
            return candidate
        count += 1

def guess_pattern(filename):
    match = re.search(r"([^.]+)\.(\d+)\.png$", filename)
    if match:
        prefix, digits = match.groups()
        return f"{prefix}.%0{len(digits)}d.png"
    return None

# ----------------- Core Actions -----------------
def convert_sequence_to_mp4(watch_dir, first_file, ffmpeg_path, framerate, video_basename):
    """Convert PNG sequence to MP4. Returns absolute path to MP4 on success, else None."""
    pattern = guess_pattern(first_file)
    if not pattern:
        print(f"[!] Could not determine pattern from {first_file}")
        return None
    output_name = get_next_video_filename(watch_dir, video_basename)
    output_path = os.path.join(watch_dir, output_name)
    print(f"[+] Converting to MP4: {output_name}")
    try:
        subprocess.run([
            ffmpeg_path, "-y",
            "-framerate", str(framerate),
            "-i", pattern,
            "-c:v", "libx264",
            "-pix_fmt", "yuv420p",
            output_path
        ], cwd=watch_dir, check=True)
        print(f"[✓] Video saved as: {output_name}")
        return output_path
    except subprocess.CalledProcessError as e:
        print(f"[!] FFmpeg failed: {e}")
        return None

def move_sequence_to_archive(watch_dir, png_files):
    now = datetime.now().strftime("%Y%m%d_%H%M%S")
    archive_dir = os.path.join(watch_dir, "processed", now)
    os.makedirs(archive_dir, exist_ok=True)
    for f in png_files:
        shutil.move(os.path.join(watch_dir, f), os.path.join(archive_dir, f))
    print(f"[→] Moved PNGs to: {archive_dir}")
    return archive_dir

def make_gif_thumbnail(video_path, ffmpeg_path, gif_width, gif_fps, gif_max_seconds, gif_suffix):
    """
    Create a small animated GIF from MP4 using palettegen/paletteuse for quality.
    Saves next to the MP4: e.g., video_0001_thumb.gif
    """
    base, _ = os.path.splitext(video_path)
    gif_path = f"{base}{gif_suffix}.gif"
    palette_path = f"{base}_palette.png"

    # Build common filter chain: fps + scale + split for palette
    vf_chain = f"fps={gif_fps},scale={gif_width}:-1:flags=lanczos"

    # Optional duration clamp (0 = full length)
    duration_args = []
    if gif_max_seconds > 0:
        duration_args = ["-t", str(gif_max_seconds)]

    try:
        # 1) Palette generation
        subprocess.run([
            ffmpeg_path, "-y",
            "-i", video_path,
            *duration_args,
            "-vf", f"{vf_chain},palettegen=stats_mode=diff",
            palette_path
        ], check=True)

        # 2) Palette use
        subprocess.run([
            ffmpeg_path, "-y",
            "-i", video_path,
            *duration_args,
            "-i", palette_path,
            "-lavfi", f"{vf_chain}[x];[x][1:v]paletteuse=dither=bayer:bayer_scale=5",
            gif_path
        ], check=True)

        # Clean up palette
        try:
            os.remove(palette_path)
        except OSError:
            pass

        print(f"[✓] GIF thumbnail created: {os.path.basename(gif_path)}")
        return gif_path
    except subprocess.CalledProcessError as e:
        print(f"[!] GIF creation failed: {e}")
        return None

# ----------------- Monitor Loop -----------------
def monitor(settings):
    watch_dir = settings["watch_dir"]
    ffmpeg_path = settings["ffmpeg_path"]
    framerate = int(settings.get("framerate", 24))
    timeout = int(settings.get("timeout", 5))
    video_basename = settings["video_basename"]
    max_runtime = int(settings.get("max_runtime_minutes", "0").strip()) * 60
    reset_on_video = settings.get("reset_timeout_on_video", "false").lower() == "true"

    make_gif = settings.get("make_gif", "true").lower() == "true"
    gif_max_seconds = int(settings.get("gif_max_seconds", "6").strip())
    gif_width = int(settings.get("gif_width", "320").strip())
    gif_fps = int(settings.get("gif_fps", "12").strip())
    gif_suffix = settings.get("gif_suffix", "_thumb")

    print(f"👁️ Monitoring folder: {watch_dir}")
    print(f"[i] FFmpeg: {ffmpeg_path}, timeout: {timeout}s, framerate: {framerate}fps")
    if max_runtime > 0:
        print(f"[i] Will auto-exit after {max_runtime // 60} minutes (unless reset)")
    if make_gif:
        print(f"[i] GIF: width={gif_width}, fps={gif_fps}, max_seconds={gif_max_seconds} (suffix='{gif_suffix}')")

    start_time = time.time()
    previous_files = set(get_png_files(watch_dir))
    last_change_time = time.time()

    while True:
        # Escape to exit
        if keyboard.is_pressed("esc"):
            print("[✋] Escape key pressed. Exiting.")
            break

        time.sleep(1)

        # Auto-exit if timer exceeded
        if max_runtime > 0 and (time.time() - start_time > max_runtime):
            print("[!] Max runtime reached. Exiting.")
            break

        current_files = set(get_png_files(watch_dir))

        if current_files != previous_files:
            previous_files = current_files
            last_change_time = time.time()
            continue

        if current_files and (time.time() - last_change_time > timeout):
            png_files = sorted(current_files)
            print(f"[⏳] Sequence complete: {len(png_files)} files")

            # 1) Make MP4
            mp4_path = convert_sequence_to_mp4(watch_dir, png_files[0], ffmpeg_path, framerate, video_basename)

            # 2) Move PNGs
            move_sequence_to_archive(watch_dir, png_files)

            # 3) Make GIF thumbnail (optional)
            if mp4_path and make_gif:
                make_gif_thumbnail(
                    video_path=mp4_path,
                    ffmpeg_path=ffmpeg_path,
                    gif_width=gif_width,
                    gif_fps=gif_fps,
                    gif_max_seconds=gif_max_seconds,
                    gif_suffix=gif_suffix
                )

            previous_files = set()
            last_change_time = time.time()

            if reset_on_video:
                print("[i] Timer reset after video creation.")
                start_time = time.time()

    print("[✓] Monitoring stopped.")

# ----------------- Entry -----------------
if __name__ == "__main__":
    try:
        settings = load_config()
        monitor(settings)
    except KeyboardInterrupt:
        print("\n[✓] Monitoring stopped by user.")
```

     The updated watchfolder.ini settings file:

```ini
[settings]
watch_dir = F:/renderfolder
ffmpeg_path = ffmpeg
framerate = 24
timeout = 5
video_basename = video
max_runtime_minutes = 30
reset_timeout_on_video = true
make_gif = true
gif_max_seconds = 6   ; 0 = full length
gif_width = 320       ; scaled width, height auto-preserved
gif_fps = 12          ; frames per second in GIF
gif_suffix = _thumb   ; appended before .gif
```
  2. I returned to this to add a new option. After successful creation of the MP4 video and moving of the PNG sequence, the script now creates a thumbnail GIF animation from the MP4 video.

     A few observations: We can render to the watchfolder or simply copy/paste a sequence into the directory. Either way the watchfolder script will see new images arrive and respond accordingly. I fired up Netrender and rendered to the watchfolder, and any excuse to use Netrender is a good excuse, right? This isn't using Netrender's native ability to run scripts after completion, but that might be something to consider, as we could have Netrender communicate with the Watchfolder script to pass project names and more over to the script.

     One thing I had forgotten about the watchfolder utility was that the .ini settings override the settings in the script itself, so I kept wondering why, even though I had changed the location of the watchfolder in the script, it refused to watch the directory I specified. Well, Rodney, that's because computers only do what you tell them to do, and you told this one to use the directory set in the .ini file. Once that was updated... all very good!

     At any rate: render a sequence of PNGs and automatically get an MP4 video, a smaller GIF animation preview, and a datetime-stamped directory holding all of the PNGs. Rather quick too, I must say.

     And while we are talking utilities to work with A:M files... here's a test of a program that visits a GitHub repository, previews the file (if a preview image is found), allows the file to be downloaded AND, if a zip file is located in that resources directory, activates a button so that zipfile can be downloaded too. It's more of a proof of concept than anything very useful. I'd like to have the program look inside the Animation:Master resource and share the preview/icon image stored there (if present) and display the File Info text (if present). Now that I've experimented with extracting the icon previews out of A:M files I think I might be up to that challenge.

     Note 1: The program first looks for a preview that belongs to the actual resource. For instance, if cube.mdl has a PNG image in the same directory named cube_preview.png, it will display that. If that preview is not present, the program will look for a preview.png image in that directory. If that image isn't present, it displays a default image. It'd be overkill, but fun, to also have an option to look for an animated GIF. A sketch of that lookup order appears below.

     Note 2: The token field is what allows more usage via GitHub. GitHub tokens can be set to expire and the one I'm using in this test expires in September. When scanning through a large repository the user will run out of free access quickly, so having the token helps a lot. Without a token: 60 requests per hour. With a token: 5,000 requests per hour. The count resets every hour.

     Note 3: This demo does not use git, although there is no reason why it couldn't be added so that models could also be uploaded to the repository. The exploration here was focused on accessing resources that are online.
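     Not the actual program, just a minimal sketch of that preview-fallback logic against the public GitHub contents API (the repo owner, repo name, directory, and token below are placeholders; the requests library is assumed to be installed):

```python
import requests

def find_preview(owner, repo, directory, resource, token=None):
    """Return the download URL for the resource's preview image:
    <name>_preview.png first, then preview.png, else None (caller shows a default)."""
    headers = {"Authorization": f"token {token}"} if token else {}
    url = f"https://api.github.com/repos/{owner}/{repo}/contents/{directory}"
    listing = requests.get(url, headers=headers).json()
    names = {item["name"]: item.get("download_url") for item in listing}

    stem = resource.rsplit(".", 1)[0]          # cube.mdl -> cube
    for candidate in (f"{stem}_preview.png", "preview.png"):
        if candidate in names:
            return names[candidate]
    return None  # fall back to the default image

# Hypothetical usage:
# print(find_preview("someuser", "am-resources", "models", "cube.mdl", token="ghp_..."))
```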
  3. Rodney

    Hero Dude

    Um... well... I needed a model to test with and Herodude just happened to be here. After an initial generic standing pose I decided to try a 'trip and fall'. I suppose this might be a setup that would convey the general idea although I think it'd pay off to start over from scratch now that the general idea is there. Added a side view that I rendered just prior to doing the front camera view.
  4. I've been messing about with a 'watchfolder' script (in Python) that monitors a directory and, when it finds a new sequence of PNG images, converts the sequence to MP4 video and moves the PNG sequence to a datetime-stamped directory inside a 'processed' directory. There are a number of things that are still rough about the program. Firstly, the majority of people will not install Python, set it up, run Python programs etc., right? Correct. So I compiled it into an executable .exe file. That seems to work pretty well.

     The Python script has a watchfolder.ini file where users can quickly adjust settings. If no .ini file is found, default locations and values are used.

```ini
[settings]
watch_dir = F:/watch_folder
ffmpeg_path = ffmpeg
framerate = 24
timeout = 5
video_basename = video
max_runtime_minutes = 30
reset_timeout_on_video = true
```

     Here we can see:
     - The script uses a specific directory/folder, so that's where the PNG sequence needs to be rendered to.
     - The script uses FFmpeg for the conversion, and the path here assumes it is in the user's environment settings (PATH). It may be better to state specifically where the FFmpeg executables are located, for example: C:/ffmpeg/bin
     - Framerate sets the frames per second of the output video.
     - The timeout is in seconds and controls how long the utility waits to see if another frame is still being generated. If frames are expected to take longer than 5 seconds to render, this value should be increased.
     - The base name of the output video can be changed here. It's just named "video" by default and new videos get an incremented number each time one is created (video_0001.mp4, video_0002.mp4, etc.).
     - Max runtime (if set) limits how long the watchfolder program will monitor the folder.
     - The max runtime timer can be set to refresh each time a new video is created, so a new 30 minute timer starts. Set this to false if a reset of the timer is not desired.

     The actual Python script:

```python
import os
import time
import shutil
import keyboard
import subprocess
import re
from datetime import datetime
import configparser

# === DEFAULT CONFIG ===
DEFAULTS = {
    "watch_dir": "F:/watch_folder",
    "ffmpeg_path": "ffmpeg",
    "framerate": "24",
    "timeout": "5",
    "video_basename": "video",
    "max_runtime_minutes": "0",
    "reset_timeout_on_video": "false"
}

INI_FILE = "watchfolder.ini"

def load_config():
    # Create a default ini on first run, otherwise read it and fill in any missing keys.
    config = configparser.ConfigParser()
    if not os.path.exists(INI_FILE):
        config["settings"] = DEFAULTS
        with open(INI_FILE, "w") as f:
            config.write(f)
        print(f"[i] Created default {INI_FILE}")
    else:
        config.read(INI_FILE)
    for key, val in DEFAULTS.items():
        if key not in config["settings"]:
            config["settings"][key] = val
    return config["settings"]

def get_png_files(watch_dir):
    return sorted([f for f in os.listdir(watch_dir) if f.lower().endswith('.png')])

def get_next_video_filename(watch_dir, basename):
    # video_0001.mp4, video_0002.mp4, ... first name not already on disk wins.
    count = 1
    while True:
        candidate = f"{basename}_{count:04d}.mp4"
        if not os.path.exists(os.path.join(watch_dir, candidate)):
            return candidate
        count += 1

def guess_pattern(filename):
    # Turn "name.0001.png" into the "name.%04d.png" pattern FFmpeg expects.
    match = re.search(r"([^.]+)\.(\d+)\.png$", filename)
    if match:
        prefix, digits = match.groups()
        return f"{prefix}.%0{len(digits)}d.png"
    return None

def convert_sequence_to_mp4(watch_dir, first_file, ffmpeg_path, framerate, video_basename):
    pattern = guess_pattern(first_file)
    if not pattern:
        print(f"[!] Could not determine pattern from {first_file}")
        return
    output_name = get_next_video_filename(watch_dir, video_basename)
    output_path = os.path.join(watch_dir, output_name)
    print(f"[+] Converting to MP4: {output_name}")
    try:
        subprocess.run([
            ffmpeg_path, "-y",
            "-framerate", str(framerate),
            "-i", pattern,
            "-c:v", "libx264",
            "-pix_fmt", "yuv420p",
            output_path
        ], cwd=watch_dir, check=True)
        print(f"[✓] Video saved as: {output_name}")
    except subprocess.CalledProcessError as e:
        print(f"[!] FFmpeg failed: {e}")

def move_sequence_to_archive(watch_dir, png_files):
    # Archive the frames under processed/<datetime stamp>/
    now = datetime.now().strftime("%Y%m%d_%H%M%S")
    archive_dir = os.path.join(watch_dir, "processed", now)
    os.makedirs(archive_dir, exist_ok=True)
    for f in png_files:
        shutil.move(os.path.join(watch_dir, f), os.path.join(archive_dir, f))
    print(f"[→] Moved PNGs to: {archive_dir}")

def monitor(settings):
    watch_dir = settings["watch_dir"]
    ffmpeg_path = settings["ffmpeg_path"]
    framerate = int(settings.get("framerate", 24))
    timeout = int(settings.get("timeout", 5))
    video_basename = settings["video_basename"]
    max_runtime = int(settings.get("max_runtime_minutes", "0").strip()) * 60
    reset_on_video = settings.get("reset_timeout_on_video", "false").lower() == "true"

    print(f"👁️ Monitoring folder: {watch_dir}")
    print(f"[i] FFmpeg: {ffmpeg_path}, timeout: {timeout}s, framerate: {framerate}fps")
    if max_runtime > 0:
        print(f"[i] Will auto-exit after {max_runtime // 60} minutes (unless reset)")

    start_time = time.time()
    previous_files = set(get_png_files(watch_dir))
    last_change_time = time.time()

    while True:
        # Check for Escape key press
        if keyboard.is_pressed("esc"):
            print("[✋] Escape key pressed. Exiting.")
            break

        time.sleep(1)

        # Auto-exit if timer exceeded
        if max_runtime > 0 and (time.time() - start_time > max_runtime):
            print("[!] Max runtime reached. Exiting.")
            break

        current_files = set(get_png_files(watch_dir))

        # Still changing: note the change and keep waiting.
        if current_files != previous_files:
            previous_files = current_files
            last_change_time = time.time()
            continue

        # No change for `timeout` seconds: treat the sequence as complete.
        if current_files and (time.time() - last_change_time > timeout):
            png_files = sorted(current_files)
            print(f"[⏳] Sequence complete: {len(png_files)} files")
            convert_sequence_to_mp4(watch_dir, png_files[0], ffmpeg_path, framerate, video_basename)
            move_sequence_to_archive(watch_dir, png_files)

            previous_files = set()
            last_change_time = time.time()

            if reset_on_video:
                print("[i] Timer reset after video creation.")
                start_time = time.time()

    print("[✓] Monitoring stopped.")

if __name__ == "__main__":
    try:
        settings = load_config()
        monitor(settings)
    except KeyboardInterrupt:
        print("\n[✓] Monitoring stopped by user.")
```

     My take is that this option for a hotwatch directory and execution of an FFmpeg conversion would be best added to Animation:Master itself, but if there is interest we can pursue this and more. This script only converts PNG sequences to MP4 video, but all manner of video formats are possible, and even GIF animation. The utility currently does not have an interface/GUI, but that would be a next step: letting the user adjust settings in the interface and even opt for different outputs. A note on the frame naming the script expects follows below.

     Here's the sequence I was testing with: video_0017.mp4
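     One note on naming: guess_pattern() expects A:M-style frame numbering with a dot before the frame number (name.0001.png). A quick sketch of what it produces, assuming the script above is importable as watchfolder (the filenames here are just examples):

```python
# guess_pattern() turns the first frame's name into the printf-style
# pattern that FFmpeg's image-sequence input expects.
from watchfolder import guess_pattern  # hypothetical module name for the script above

print(guess_pattern("shot.0001.png"))   # -> "shot.%04d.png"
print(guess_pattern("shot.001.png"))    # -> "shot.%03d.png"
print(guess_pattern("shot_0001.png"))   # -> None (no dot before the digits, so no conversion)
```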
  5. Ruffle has a demo site that .swf files can be loaded into: https://ruffle.rs/demo/
  6. I've had problems converting .swf files and have to remind myself what approach to use... particularly with the .swf files created by Hash Inc with Camtasia, which uses the proprietary TechSmith video codec. THIS program will allow viewing of those .swf videos: https://ruffle.rs/#downloads That's a start. If worse comes to worst, that program (or a similar one) could be used to play the .swf files while recording them and re-saving them in a different format. BUT... there's surely another solution.

     Aside: When I use converters such as FFmpeg the result is cropped imagery, which is likely a result of TechSmith's codec, which only records changes (cursor locations etc.). A quick way to check what codec is actually inside a given .swf is sketched below.

     Also: FFplay (the player that comes with FFmpeg) will play .swf videos such as the TechTalks and the audio is fine. It's the cropped elements of the screen that are problematic. They play, but... the visuals are chaotic (see above).

     Added: A screenshot from the playback of the Composite TechTalk:
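     Not a fix, just a minimal probe sketch to confirm what FFmpeg sees inside one of those .swf files (the filename is a placeholder; ffprobe ships alongside ffmpeg):

```python
import subprocess

# Ask ffprobe which video streams/codecs the .swf contains. If the video
# stream reports a TechSmith codec (e.g. "tscc"), that would explain why a
# straight ffmpeg conversion comes out cropped/mangled.
subprocess.run([
    "ffprobe", "-hide_banner",
    "-select_streams", "v",
    "-show_streams",
    "TechTalk_Composite.swf"   # placeholder filename
], check=False)
```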
  7. Outstanding!
  8. Congratulations everyone! Excellent work.
  9. @gazzamataz It's always great to see you.

     From time to time over the years I've seen projects here in the A:M Forum that really capture my curiosity, and not just from a general perspective but from one that I guess I would call 'commercial'. In most of these I think... this is a really intriguing concept. Where things get a bit more fluid is where I think, "Is this commercial enough?" and "How could this be simplified/polished?"

     I should say that your project reminds me of some of the classics like "H.R. Pufnstuf" which really capture the imagination (and definitely did when I saw the show as a kid). Music has its songs that are 'ear worms' we find ourselves humming or thinking of frequently; when thinking of H.R. Pufnstuf, some of the design elements are like this. I think of Cling and Clang as supporting characters, but almost all of the characters have an odd appeal that captures the attention. Your 'Bella the Bear' has some of this 'odd intrigue' and you've put a lot of effort into the concept over the years. As you've suggested, the look and feel is definitely vintage early-2000s CGI, which in its own way has some appeal, but for most it probably lends itself more to curiosity than commercial viability.

     So to the question at hand, which is:

     ```should I revisit my project once more to see if I get anywhere with it or is the market for children’s characters and stories so saturated I should just tinker with my project for the fun of it?```

     I do think you should revisit the project, but I would suggest for the moment tinkering with a 'for the fun of it' focus. You already know Bella and Friends could use a refresh in order to test whether the concept can be made more viably commercial. While you don't want 'Bella' to become something you don't want it to be, it would be a good exercise to consider what a commercial marketing house that took on the project might do to make the concept more marketable.

     I don't know how familiar you are with Eastman and Laird's experience with 'Teenage Mutant Ninja Turtles', but their story is something of a legend. Their comic book characters and concepts were purchased and adapted to 'kid friendly' animation, and the world took that and ran with it. Some of that success was rather problematic... and lots of changes to characters and concepts happened.

     All of this to suggest that you first and foremost need to make yourself happy. (While still having enough money to eat and have a roof over your head, of course!) You might need to let go of your characters a little (but not to the point where you can't always do your own take as you see fit... in other words: reserve some rights to personal use). If tomorrow someone offered you a truckload of money to license 'Bella the Bear' and they ran with it... where would they take the concept? While having fun... within your limits of production capability... and while waiting for lightning to strike, insert yourself as that someone with the truckload of money and... run with it.

     I would start with the title concept "Bella the Bear". Bella is a bear. That's where I would start.
  10. I haven't used Resolve much lately so I'm not sure I'm the one to create a tutorial on that. The first step though is: Download the program(s)
  11. Here's a bit of behind the scenes from one of @Pizza Time's projects using one of Dan's models. The tie fighter is being hit by laser fire which causes an explosion so I added bones for the wings to allow them to fly off. btsbonedtiefighter.mp4
  12. It's all 'math' to me... My question as it relates to 3D modeling is: how does this contribute to continuity of adjacent (relatively) planar surfaces? There is another element of this video that suggests 0 - 0 = 0. But is this more 'pulling a rabbit from a hat'* in considering an x - y = z where none of the variables are (invariably) equal? How equal are they, and when are they equal? *To use a phrase from the video.

     This underlying scheme concerns polygons. Useful for math, but historically an obstacle for splines and patches.

     Terms to consider regarding continuity: triagons, non-intersecting diagonals.

     Potentially useful: the hexagon is a useful construct in that it can be divided into quadrilaterals.

     Going deeper down the rabbit hole: in the second video we are introduced to the idea of a null subdigon, which is the equivalent of a two-point line. This is also referred to as the 'roof' of the shape under consideration.
  13. As a workaround until that gets fixed you might try applying the video/reference to a single patch (as a patch image). Decals also work when their timing is animated, so that is another option. Layers... do NOT appear to work, so they are likely affected by the same bug as Rotoscopes.
  14. Here's a quick Shuttle landing using one of these models. A few notes:
     - The initial arrival is just the same image scaled up (well, actually scaled down in reverse to frame 1).
     - The initial fade-in of the arrival is just a black image faded out over time.
     - The iris out at the end is an animated gradient fill.
     - I started to add some subtleties to the shot, such as the platform extending to connect to the landing pad, but that's hard to see. Other than the wings of the shuttle, that's the only thing rigged with bones in the shot.

     ShuttleLanding.mp4
  15. Robert, Roger and I were discussing various topics, and some of Walter Lantz's drawing and animation resources were shared and discussed. Here's one that we didn't discuss, on creating characters, that includes a storyboarding session. Several current-day legends in the animation business, such as Eric Goldberg, claim that watching Walter Lantz's shows delving into the process of animation was an early inspiration to them. What got me thinking in the direction of Walter Lantz was his book 'The Easy Way to Draw', which I had never heard of but have recently added to my library.
  16. Change: along with processing the different coin values, we add some error checking in this one. (A rough sketch of the idea appears below.)
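     Not the course's C version, just a rough Python sketch of the same idea: greedy coin counting plus re-prompting until the input makes sense (coin values are US cents).

```python
# Greedy coin count with basic input checking (rough sketch of the idea only).
def get_cents():
    while True:
        try:
            cents = int(input("Change owed (in cents): "))
            if cents >= 0:
                return cents
        except ValueError:
            pass
        print("Please enter a non-negative whole number.")

def count_coins(cents):
    coins = 0
    for value in (25, 10, 5, 1):   # quarters, dimes, nickels, pennies
        coins += cents // value
        cents %= value
    return coins

if __name__ == "__main__":
    print(count_coins(get_cents()))
```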
  17. Attempt at Scrabble: Note for the curious: A key element of the program is changing inputs to uppercase.
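     Roughly the idea (the course version is in C; this is just a minimal Python sketch): upper-casing the input means one lookup table covers both "hello" and "HELLO".

```python
# Standard Scrabble letter values; non-letters simply score 0.
POINTS = {
    'A': 1, 'B': 3, 'C': 3, 'D': 2, 'E': 1, 'F': 4, 'G': 2, 'H': 4, 'I': 1,
    'J': 8, 'K': 5, 'L': 1, 'M': 3, 'N': 1, 'O': 1, 'P': 3, 'Q': 10, 'R': 1,
    'S': 1, 'T': 1, 'U': 1, 'V': 4, 'W': 4, 'X': 8, 'Y': 4, 'Z': 10
}

def score(word):
    # Converting to uppercase is the key step: one table handles any input case.
    return sum(POINTS.get(ch, 0) for ch in word.upper())

print(score("Question"))  # 17
```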
  18. Guest speaker for the CS50 course in 2005... some guy named Mark: As is the case these days... not a lot of people in attendance in the class.
  19. I seriously have issues... JUST DO THE ASSIGNMENT. Is that so hard? Me: I think I'll change the assignment to make a pyramid instead. Gah! J U S T... D O... T H E... A S S I G N M E N T !
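     For anyone curious what that self-inflicted detour looks like, a minimal sketch of a right-aligned pyramid (the course exercise is done in C; this is just the idea in Python):

```python
# Print a right-aligned pyramid of '#' characters, one row per level.
def pyramid(height):
    for row in range(1, height + 1):
        print(" " * (height - row) + "#" * row)

pyramid(4)  # prints a 4-row pyramid, widest row at the bottom
```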
  20. Wow! Outstanding. That's a lot of characters. I hadn't realized just how many characters you've created.
  21. In Week 0 the Harvard CS50 course demos programming using the learning tool called 'Scratch'. The first intro course I took in programming used 'Alice'. I liked Alice because it could load OBJ models and Animation:Master could output OBJ models! Scratch is more popular and more widely used, and it's likely that if you are younger than 30 you've been exposed to it if you had any computer-related classes in school. In Week 1 the course moves on to using the C language. This is quite useful, as C++ derives much of its standard usage from C. And C++ is what drives programs like Animation:Master.
  22. Here's your chance to master the art of programming and computer science. The course is starting today (officially) but has been run continuously for the past few years. The course is self-paced. Link: https://www.edx.org/learn/computer-science/harvard-university-cs50-s-introduction-to-computer-science Take the plunge. You know you want to. You know you need to.
  23. My current take on this gap between bipartite grids and the four color theorem is that at the moment we join 'areas' (grid squares) we need to establish a new 'color'. According to the science we don't need more than 4 colors, but we can have as many colors as we want. So... underlying the whole gamut of shape and group assignments, our algorithm can chug away at reducing to 4 colors (a rough sketch of that idea follows below). We then dictate in some fashion the shapes and extents of those areas and build upon and extrapolate from that. To the observant this might appear to place us at the intersection between raster and vector graphics. Attached is this 'nonbipartite' grid project: nonbipartite.prj
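     Not the proper four-color machinery, just a minimal sketch of that 'chug away' idea: greedily give each grid square the lowest color not used by any square it touches, counting corner contact as touching. Greedy coloring makes no 4-color guarantee in general; on a plain grid this scan happens to settle on 4.

```python
def greedy_color_grid(rows, cols):
    """Assign each cell the smallest color index not used by any
    already-colored neighbor, counting corner contact as touching."""
    colors = {}
    for r in range(rows):
        for c in range(cols):
            used = set()
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    if (dr, dc) != (0, 0) and (r + dr, c + dc) in colors:
                        used.add(colors[(r + dr, c + dc)])
            color = 0
            while color in used:
                color += 1
            colors[(r, c)] = color
    return colors

grid = greedy_color_grid(4, 4)
for r in range(4):
    print(" ".join(str(grid[(r, c)]) for c in range(4)))
# With corner contact counted, a plain 2-color checkerboard is no longer
# enough; this row-by-row scan ends up using 4 colors on a full grid.
```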
  24. Here's an example of a non-bipartite grid, meaning that no two grid squares of the same color touch (even at the corners). If they could touch at the corners they could be termed 'bipartite'. In A:M we can work around this by having multiple groups of the same color. In effect, masking or hiding what is actually happening. In other words, presenting a grid that appears bipartite when in fact it is not.

     Something worth observing here might be the initial choice of which grid squares were white (given that, underneath it all, all the grid squares are black). In the first row our white group has started with the second patch. In the second row we shift and choose the patch to the left. We could just as easily have chosen to shift right and add that to our group instead. There is something of significance in this choice, as it sets the stage for what other grid squares can be selected and included in our group and what grid squares must be left out. But we must make a choice... so is one choice more correct than the other? Should we turn left or turn right? As with continuity, it would at least initially appear that consistency is key. Our decision being made, we must proceed and deal with the consequences.