
Changes to Multipass subdivision in v18f



  • Hash Fellow

First, if you never use more than 25 multipasses this was a non-issue, and even if you do, it was mostly undetectable. The following is only for the painfully curious. B)

 

One way A:M does motion blur is with multipass: it slightly increments the point in time being rendered for each pass, and moving objects appear blurred as the passes are blended together. As the number of passes grows, the fraction of time between passes must get smaller.

 

It turns out A:M stores time in "ticks", not seconds or frames, and A:M counts 135000 ticks per second. Every keyframe you make is stored internally as a number of ticks elapsed from zero. This makes it possible to change your PRJ from 24fps to 30 fps, for example, and not have your animation run 25% faster.

 

A:M can count up to 18,446,744,073,709,551,616 ticks which is somewhat more than 4.3 million years.

 

135000 was chosen because it is evenly divisible by all the common frame rates and further divisible beyond that for most multipass intervals.

 

For example, at 24 fps, each frame is 5625 ticks apart. At 20% motion blur the multipass increments must be apportioned over 1125 ticks and with 25 passes each pass is exactly 45 ticks apart in time.
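For the painfully curious, that arithmetic is easy to check (a quick Python sketch; the 135000 ticks-per-second figure comes from the posts above, everything else is plain arithmetic and the names are my own):

from fractions import Fraction

TICKS_PER_SECOND = 135000                 # A:M's internal time unit

# 135000 divides evenly by the common whole frame rates
for fps in (24, 25, 30, 50, 60):
    print(fps, "fps =", TICKS_PER_SECOND // fps, "ticks per frame")

# The worked example: 24 fps, 20% motion blur, 25 passes
ticks_per_frame = TICKS_PER_SECOND // 24          # 5625
blur_window = Fraction(1, 5) * ticks_per_frame    # 20% of a frame = 1125 ticks
print(blur_window / 25)                           # exactly 45 ticks per pass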

 

Previously, A:M would calculate that ticks-per-pass number once at the beginning of a render, multiply it by the number of the pass about to be rendered, and then add the result to the tick count for the start of the current frame to get the time of the current pass.

 

Up to 25 passes that calculation worked fine, but beyond that problems occurred.

 

At 121 passes, 1125 ticks divided by 121 comes to about 9.2975 ticks per pass, but ticks can only be whole numbers, so the increment has to be stored as either 9 or 10.

 

With many passes, that small per-pass rounding error would accumulate into a large one: the motion blur increments would either fall short of the full intended blur distance OR reach the full distance too soon.
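A rough simulation of that old behaviour: one whole-tick increment is computed up front and then multiplied out. The exact rounding rule and pass numbering inside A:M are my assumptions here, but the drift is the point:

BLUR_WINDOW = 1125   # 20% motion blur at 24 fps, in ticks (from the example above)

def old_final_pass_offset(total_passes):
    ticks_per_pass = round(BLUR_WINDOW / total_passes)   # forced to a whole number of ticks
    return ticks_per_pass * (total_passes - 1)           # accumulated offset of the final pass

for n in (25, 100, 121, 196):
    print(n, "passes -> final pass at", old_final_pass_offset(n), "ticks")

# 25 passes  -> 1080 ticks (45 per pass, exactly as intended)
# 100 passes -> 1089 ticks
# 121 passes -> 1080 ticks (falls short of even the 100-pass blur)
# 196 passes -> 1170 ticks (overshoots the 1125-tick window, so the last passes pile up at the end)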

 

This composite shows a bright white dot constrained to travel a path, from the brown lines on the left to the brown lines on the right, in 20% of one frame with multipass set to 20% motion blur.

 

At 121 passes the dot has traveled less than the 100 pass version and at 196 passes the last dot is brighter than the rest, indicating that several passes have rendered it in the same spot.

 

20percent.png

 

For motion blur purposes this error was usually unnoticeable when it made the blur come up short, but in the over-reach cases it could create a detectable solid image at the end of a blur, as too many passes were being assigned the exact same instant of time. This was also a problem for light-on-a-spline situations, with the light ending up stuck at the end of its path for part of its intended sweep.

In v18f this is fixed!

 

Steffen has changed the multipass subdivision calculation so that it now calculates the ticks for the current pass as

 

(current pass number / total passes) x (motion blur fraction) x (ticks per frame)

 

This eliminates accumulating error and the resulting tick count for each pass is never off by more than a fraction of a tick.
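A sketch of the corrected per-pass calculation, using the formula above (the final rounding to a whole tick, and the names, are my assumptions):

BLUR_WINDOW = 1125   # 20% motion blur at 24 fps, in ticks

def new_pass_offset(pass_number, total_passes):
    # (current pass / total passes) * blur window, computed fresh for each pass,
    # so the rounding error never accumulates from one pass to the next
    return round(pass_number / total_passes * BLUR_WINDOW)

for n in (121, 196):
    print(n, "passes -> final pass at", new_pass_offset(n - 1, n), "ticks")

# 121 passes -> final pass at 1116 ticks
# 196 passes -> final pass at 1119 ticks
# Every pass lands within half a tick of its ideal position; nothing overshoots or piles up.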

 

Thank you, Steffen, for fixing that! clap.gif



  • Hash Fellow

I'll note that this 135000 value is part of the difficulty in implementing a 29.97 fps rate for NTSC video.

 

The official frame rate of NTSC video is not truly 29.97 but 30x1000/1001 which is roughly 29.97002997002997...

 

If we divide 135000 by (30 x 1000/1001) we get 4504.5 ticks per frame, which is not a whole number. If, perhaps, in the long-ago days of Hash they had chosen 270000 ticks per second as the basic value, then that division would come out to exactly 9009 ticks per frame.
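That 4504.5 figure is easy to check with exact fractions (a quick sketch, nothing A:M-specific):

from fractions import Fraction

ntsc_fps = Fraction(30 * 1000, 1001)        # the true NTSC rate, 30000/1001 fps

print(Fraction(135000, 1) / ntsc_fps)       # 9009/2 = 4504.5 ticks per frame (not a whole number)
print(Fraction(270000, 1) / ntsc_fps)       # 9009 ticks per frame (a whole number)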

 

Another part of the problem is that "29.97" still imagines there are 30 frames for every second and counts that way. That accumulates as an error between the time it has counted and the time that passes in the real world.

 

After 60 seconds, "29.97" has counted almost two fewer 30ths of a second than have actually passed, and the counting falls about 1.8 frames further behind real time with every minute.

 

To catch up, "29.97" skips two counts at the start of every minute (except every tenth minute, so the correction comes out even). After the frame it calls 00:00:59:29 (which is really happening at about 00:01:00 in real time) the next one is called 00:01:00:02. This is called "drop frame" even though it is actually dropping numbers and not video frames.
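That label-skipping is all "drop frame" amounts to. Below is a minimal sketch of the standard frame-count-to-drop-frame-timecode conversion (generic SMPTE bookkeeping, not anything from A:M; the function name is my own):

def dropframe_timecode(frame):
    # Drop-frame labeling at 29.97 fps: frame numbers 00 and 01 are skipped at the
    # start of every minute except minutes divisible by 10. Only labels are dropped,
    # never actual video frames.
    labels_per_minute = 30 * 60 - 2                      # 1798 labels in a "dropped" minute
    labels_per_ten = labels_per_minute * 9 + 30 * 60     # 17982 frames per 10 minutes
    tens, rem = divmod(frame, labels_per_ten)
    skipped = 18 * tens + (0 if rem < 2 else 2 * ((rem - 2) // labels_per_minute))
    f = frame + skipped                                  # renumber as if counting a full 30 fps
    ff, ss, mm, hh = f % 30, (f // 30) % 60, (f // 1800) % 60, f // 108000
    return "%02d:%02d:%02d:%02d" % (hh, mm, ss, ff)

print(dropframe_timecode(1799))    # 00:00:59:29  (last label of the first minute)
print(dropframe_timecode(1800))    # 00:01:00:02  (labels :00 and :01 were skipped)
print(dropframe_timecode(17982))   # 00:10:00:00  (no skip at a tenth minute)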

 

So there are three problems in implementing 29.97 in A:M:

 

- Our tick value is not evenly divisible for that frame rate

- The time scale at the top of the timeline would have to accurately exclude two unwanted frame numbers every minute (except every tenth minute)

- The time scale would have to somehow deal with the fact that almost none of the frame numbers it displays are truly in synch with real time. This is a problem for synching with audio.

 

None of these are unsolvable but 29.97 is a rare need for animators. Most of our work is for internet video which doesn't need 29.97 and animators in general rarely do a single shot that is long enough for the time discrepancy to be noticeable. We edit our single shots together in a video editor that conforms them to 29.97 if needed and the synch error for any single shot in that new, slower 29.97 fps is always under one frame which will not be noticeable as an audio synch problem.


  • Admin

The problem I have is where folks say that "just because a frame is labeled 00:00:01:00 does not mean that 1 second has passed".

It should mean exactly that although there is an important part of the equation missing (i.e. 00:00:01:00@30 frames per second(FPS)).

 

How many frames have been presented at 00:00:03:00?

Well, the answer depends on the frame rate.

If the frame rate is 1FPS then the answer is 3 frames.

If 60FPS then the answer is 180 frames.

 

It is when we get into the floating point numbering that things get harder to follow but only because these were devised for a special case (NTSC color video). I note that NTSC black and white video, although rarely used these days would still be 30fps. (Why do I sense that interlacing is related to this?)

 

But things truly get weird when the labels suggest something is occurring that isn't.

For example, the term 'drop frame' would suggest frames are dropped when in fact that isn't the case.

 

In the end it should be enough to know that when we see a timecode we can immediately suggest how much time that represents (00:01:00:00... okay that is 1 minute's worth of frames) BUT we need to ask the all important question "At what Frames Per Second (FPS)?" before transferring our work to someplace else.

 

I *almost* understand this (because I can easily understand that 1 does not always equal 1) but that *almost* is also what can make it so frustrating.

 

There is an aspect of this that suggests to me that (similar to linear/nonlinear color space) publishing to NTSC should be a process reserved for the final phase of a production in order to limit the accumulation of error.

 

It *was* my understanding that A:M resolved this discrepancy in realtime vs frames behind the scenes.

But now Robert has introduced a new label into the equation that needs to be defined. (and he did that above)

 

If we divide 135000 by (30 x 1000/1001) we get 4504.5 ticks per frame, which is not a whole number. If, perhaps, in the long-ago days of Hash they had chosen 270000 ticks per second as the basic value, then that division would come out to exactly 9009 ticks per frame.

 

So, as I understand it:

A:M stores time in "ticks", not seconds or frames, and A:M counts 135000 ticks per second although we don't have any direct means of seeing these representative ticks.

135000 was chosen because it is evenly divisible by all the common frame rates and further divisible beyond that for most multipass intervals.

I'm unclear whether a formal request to use 270000 ticks per second (twice that of the original implementation) is being considered, or whether this was recently implemented.

 

In essence (and please feel free to correct me here in my terminology!) the 135000 ticks per second can be considered A:M's internal timekeeping codec.

The codec being the program (A:M's implementation) used to read and write files.

As the codec is designed for use only internally inside A:M its integrity is maintained via the self correcting nature of its implementation.

In other words A:M's internal timekeeping is highly accurate and therefore the resulting timecode and fps from internal conversions can be relied upon.

At some point it seems that we'd be calibrating individual pieces of hardware if we wanted any additional level of certainty.


  • Hash Fellow
The problem I have is where folks say that "just because a frame is labeled 00:00:01:00 does not mean that 1 second has passed".

It should mean exactly that although there is an important part of the equation missing (i.e. 00:00:01:00@30 frames per second(FPS)).

 

If they mean that the frame labeled 00:00:01:00 arrives slightly later than real time 1 second, because 29.97 was counting slightly too slow, I agree with them.

 

After 00:00:00:00, the frame numbers arrive later and later than real time; it isn't until you get to 00:01:00:02, where we threw out two numbers to catch the counting up, that a frame number and real time nearly match again. Then the frame numbers start arriving later and later again after that until we skip from 00:01:59:29 to 00:02:00:02 to catch the counting up to real time once more.

 

 

It *was* my understanding that A:M resolved this discrepancy in realtime vs frames behind the scenes.

But now Robert has introduced a new label into the equation that needs to be defined. (and he did that above)

 

For the whole frame rates you CAN set in A:M there is no discrepancy between the frame count and true elapsed time, within the limits of computer accuracy of course.

 

A:M sidesteps the problem of counting in 29.97 by not letting you set it. :D

 

 

 

 

It is when we get into the floating point numbering that things get harder to follow but only because these were devised for a special case (NTSC color video). I note that NTSC black and white video, although rarely used these days would still be 30fps. (Why do I sense that interlacing is related to this?)

 

But things truly get weird when the labels suggest something is occurring that isn't.

For example, the term 'drop frame' would suggest frames are dropped when in fact that isn't the case.

 

I'll note that industry opinion is beginning to accept the fact that 29.97 was never really necessary. No one has ever demonstrated the supposed interference problem that would happen by running color at true 30fps and the whole debacle is traced to a mistaken engineering report produced during the rush to set a color TV standard.

 

 

 

link

 

 

The report, written by an obscure General Electric engineer, went on to say that if the difference signal happened to be an odd multiple of one-half the scan rate, then this beating would be reduced. If the frame rate was dropped 0.10001%, then the scanning frequency would be 15.734264 kHz, the chrominance subcarrier would be 3.579545 MHz, and the beat product (if there was one) would be 920.455 kHz, which is very, very close to the 117th multiple of half the scan rate. Did you get all that?

 

But a close look at the technical documents and the committee proceedings around this point seem to show that the problem never really existed. According to Mark Schubin, technical editor and columnist for Videography magazine, there should not have been any cause for concern. Since “the sound carrier is FM, the frequency swings and is never in its nominal position so any beating wouldn't be steady, and therefore not visible.” Another video engineering expert, Tim Stoffel, says that a higher chrominance subcarrier frequency could have been used (495/2 fs), and the audio subcarrier also increased slightly to make the difference signal fall on the right multiple of the scan rate, and despite the change, “most [black and white] sets would have tolerated it at the time.” However, the TV set manufacturers' association “screamed bloody murder,” and so the decision was made to leave the carriers where they were (or pretty close) and change the horizontal scanning frequency and, consequently, the frame rate instead. This didn't make the transmitter manufacturers or the stations very happy, says Stoffel, because, “It meant much expensive alterations to transmission equipment, as the AC line could no longer be used as a frequency reference for sync!”

An engineer who was there at the beginning, Rollie Zavada of Eastman Kodak, diplomatically calls the decision to change the frame rate “debatable.” Other sources say that the very first generation of color sets, and also the black-and-white sets that were made by the time the color standard was adopted, had good enough filters on the audio section so that leakage between the subcarriers was simply not an issue.

 

You will still find Very Serious Video Professionals who insist that it was necessary, but they are just parroting the same mistaken premises that have been repeated for 60 years. None of them has ever seen the alleged problem in person and none of them ever will, because it can't be produced.


  • Hash Fellow
But things truly get weird when the labels suggest something is occurring that isn't.

For example, the term 'drop frame' would suggest frames are dropped when in fact that isn't the case.

 

I think "drop frame number" would have been a better name.


  • Hash Fellow
I'm unclear whether a formal request to use 270000 ticks per second (twice that of the original implementation) is being considered, or whether this was recently implemented.

 

No request has been made to change it to 270000. It seems like a big can of worms to open in pursuit of a feature we will rarely ever need.

 

AFAIK, 135000 ticks per second has been the A:M way for a long, long time. I'm not sure I'd call it a "codec"; I'd just call it the unit of time that A:M works with.

 

Paradoxically, the time in A:M files seems to be stored as seconds with a decimal if needed but not a very long decimal. There must be some internal code to fix the error introduced by the short decimal when the file is read in. Mistake on my part. See correction below.


  • Admin

Not that we would want to do this but there are a few places where we can monitor/change (refine/define?) the internal framerate within A:M.

There is also a place where we can set 29.97FPS for output but that is in the Quicktime rendering settings.

 

As an aside: I note that there is something strange going on with the FPS being displayed down at the lower left corner of the interface where it states FPS.

Either that or I'm not in sync with what is going on there.

 

In the attached image I saved out the same sequence of images as both 24FPS and 30FPS sequences and (as expected) A:M filled in the gaps by performing the translation.

One does have to be careful to make sure that in testing we don't adjust (or fail to adjust) a variable such as the FPS on the output of a Quicktime sequence.

I assume such a setting could effectively cancel out (reconvert) the sequence by adjusting the frames present again.

 

It's important to note that in both cases the length of the sequence remains the same at one second.

It's the number of frames (and subframes) that gets displayed within that one second that changes.

 

And of course there is Multipass itself which is the whole point of this thread.

I'm starting to learn more about that here!

Ticks.jpg


  • Admin
the time in A:M files seems to be stored as seconds with a decimal if needed but not a very long decimal.

 

I am in way over my head here but it's fun to explore.

Perhaps this short decimal is a simple percentage representing the space between 'frames'?

In this way an extended decimal such as .334234 would round to 33 percent and render the next frame accordingly.

There being no need for further refinement outside of multipass subframe rendering.


  • Hash Fellow

I take it back! :o A:M is not writing time in files as a floating point second value (what I should have said instead of seconds and a decimal).

 

A:M is storing times as seconds plus 30ths of a second plus ticks

 

If I make some keyframes in a 30fps PRJ I get this; the time value is the second field on each line:

 

MatchName=X

1 0 0 (0 seconds)

1 1:0 86.6137 (1 second)

1 2:0 118.124 (2 seconds)

1 2:1 133.621 (2 seconds and 1/30th)

1 2:2 37.6487 (2 seconds and 2/30ths)

1 2:15 -145.196 (2 seconds and 15/30ths)

 

 

If I just change that PRJ to 24 fps they still have their format of seconds-plus-30ths

 

MatchName=X

1 0 0

1 1:0 86.6137

1 2:0 118.124

1 2:1 133.621

1 2:2 37.6487

1 2:15 -145.196

 

If I snap those keyframes to 24fps positions then they get times in seconds plus 30ths of a second plus ticks (the ticks come after the period I thought was a decimal)

 

MatchName=X

1 0 0

1 1:0 86.6137

1 2:0 118.124

1 2:1.1125 133.621 (2 seconds and one 30th plus 1125 ticks equals 2 seconds and 1/24th)

1 2:2.2250 37.6487 (2 seconds and two 30th plus 2250 ticks equals 2 seconds and 2/24th)

1 2:15 -145.196 (2 seconds and 15 30ths equals 2 seconds and 12 24ths)

 

The periods looked like a decimal to me but it is really just a separator between the 30ths and the ticks.
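Based on that reading of the format (seconds, a colon, 30ths of a second, then a period and ticks), a small parser makes the examples above easy to check. This is only a reading aid built on the format as deduced here, not A:M's own parsing code, and the function name is made up:

from fractions import Fraction

TICKS_PER_SECOND = 135000
TICKS_PER_30TH = TICKS_PER_SECOND // 30      # 4500

def keytime_to_seconds(text):
    # "seconds[:thirtieths[.ticks]]" -> exact seconds, per the format deduced above
    secs, _, rest = text.partition(":")
    thirtieths, _, ticks = rest.partition(".")
    total = (int(secs) * TICKS_PER_SECOND
             + int(thirtieths or 0) * TICKS_PER_30TH
             + int(ticks or 0))
    return Fraction(total, TICKS_PER_SECOND)

print(keytime_to_seconds("2:1.1125"))   # 49/24 = 2 seconds + 1/24
print(keytime_to_seconds("2:2.2250"))   # 25/12 = 2 seconds + 2/24
print(keytime_to_seconds("2:15"))       # 5/2  = 2.5 seconds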


  • Admin

There are some interesting aspects to all of this as a Chor doesn't store the FPS but the Project file will (I believe A:M assumes 30FPS in the saved file... and therefore doesn't even bother to write it... unless specifically set to something else). Well, now I can't get a project file to save without including the FPS. Go figure. I must have been looking at a Chor file only before.

 

Here for review is the File Info property for FPS:

 

FPS

Sets the frame rate of the selected Project.

Keyframes are associated with an exact time, measured in seconds.

The time depends on two things: when the keyframe was created and the frame rate set for the project.

 

With frame rate set at 30 frames per second (fps), frame 15 represents 0.5 seconds into the animation.

If the frame rate is set at 24 fps, the same frame 15 represents 15/24 second, which equals 0.625 seconds, which causes the keyframe to occur significantly later in the animation. Because of this, setting the project frame rate BEFORE animating is EXTREMELY important. The default is 30 fps.

 

However, after the animating is complete, changing the frame rate allows you to render different numbers of frames per second. This means you can create a 2 hour movie, render it out at 24 fps for film with the frame rate set at 24, then set the frame rate to 30 fps and render for video, and you will see the same 2 hours of animation with the same timing, just more frames were rendered.

Note that upper case emphasis is present in the original.

The bold portion is why I believe your estimation of 30ths of a second is not always going to be accurate.

If I understand this correctly that should only be the case when the FPS is set to 30.

 

Edit: In rereading your post perhaps that is what you were saying?

 

This is the write up from the Tech Ref that was planted in my mind to suggest that time itself is the constant in all of this.

While the overall time will be maintained, the keyframes will adjust accordingly to maintain the desired (or should we say set) FPS.

We may desire a certain result but we'll only get it if we have our settings correct.


  • Hash Fellow

I believe that documentation info is wrong.

 

If I set a PRJ to 30 fps and create keyframes at 0 seconds, 1 second and 1:15 ( 1.5 seconds) A:M will write this in the file:

 

MatchName=X

1 0 0

1 1:0 -78.9453

1 1:15 100.872

 

 

If I start over and set a PRJ to 24 fps and make key frames at 0 seconds, 1 second and 1:12 (1.5 seconds again in the new PRJ) A:M will write this in the file:

 

MatchName=X

1 0 0

1 1:0 105.543

1 1:15 -87.7473

 

 

So whether you have the PRJ set to 24 or 30, A:M will always write the times in the file as a clock that counts in 30ths of a second (plus ticks if necessary).

 

One and a half seconds will always be represented as 1 second and 15/30ths no matter what fps the PRJ is set to. If you are in a 24 fps PRJ A:M will be clever enough to draw that keyframe at one second and 12 frames.

 

I can't create a case where A:M writes the values any other way. If you can that would be something to look at.
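Going the other way, here is a sketch of how a frame number at any project fps could be written in that seconds:30ths(.ticks) form. It is illustrative only (A:M's own writer may differ in details, and encode_keytime is a made-up helper), but it shows why 1.5 seconds always comes out as 1:15 regardless of the fps setting:

TICKS_PER_SECOND = 135000
TICKS_PER_30TH = TICKS_PER_SECOND // 30      # 4500

def encode_keytime(frame, fps):
    # frame number at a given fps -> "seconds:thirtieths" plus ".ticks" when needed
    total_ticks = frame * TICKS_PER_SECOND // fps
    secs, rem = divmod(total_ticks, TICKS_PER_SECOND)
    thirtieths, ticks = divmod(rem, TICKS_PER_30TH)
    return "%d:%d" % (secs, thirtieths) + (".%d" % ticks if ticks else "")

print(encode_keytime(45, 30))   # frame 45 at 30 fps (1.5 s) -> 1:15
print(encode_keytime(36, 24))   # frame 36 at 24 fps (1.5 s) -> 1:15
print(encode_keytime(25, 24))   # frame 25 at 24 fps -> 1:1.1125 (1 second plus 1/24)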


  • Admin
I can't create a case where A:M writes the values any other way. If you can that would be something to look at.

 

I started playing with a file to check and see for myself but almost injured myself in the process.

I kept losing track of the variables.

That's when I noted that the FPS isn't stored in a Chor but rather in the Project which in and of itself makes sense because it is in the Project that we set the FPS. (And of course we can set the default FPS in the control panel so that A:M knows what to set FPS at when it creates a new Project).

 

Perhaps something works differently for Chors nowadays because some folks don't use Project files (Like Nancy... using only Chors... albeit they are surely embedded Chors) and yet they still maintain their FPS. But this seems a bit unlikely. I'm not sure what would have changed to make the documentation obsolete in this regard.

 

My initial theory was that FPS didn't matter because A:M treated its internal time in such a way as to easily be converted.

This seemed to be corroborated when I didn't see the FPS entry in the text of the file but now that I see it there consistently I'm softening on that stance considerably.

Of course I can (and should) delete that entry and see what happens. :)

 

I suppose it is possible that at one time the documentation was accurate but since then things have changed.

There was one nagging item that suggested itself to me as a final solution but my tired brain decided it needed to sleep first before taking a closer look.

 

More exploration to do!

 

It might help if we were looking at the same file here.

Originally I was looking at an empty Project and an empty Chor but one must change something before there is something to measure. ;)

 

Here is the (empty) Project file I am currently working with:

 

[PROJECT]

ProductVersion=18

Release= PC

Selected=TRUE

FPS=1

Embedded=FALSE

CreatedBy=Rodney Baker

Organization=Fuj10n2020

Url=newartofanimation.wordpress.com

Email=rodney.baker@gmail.com

LastModifiedBy=Rodney Baker

FileInfoPos=248

[/PROJECT]

 

As an aside: Quite a while ago I noted the 'Selected=TRUE' tag and thought how useful that might be for interactive tutorials that walked people through Projects. The idea being that we can remember to set the focus prior to saving a file (or, if properly done, via the text of the file later) so that when the Project is opened the correct place is already receiving a highlighted/selected focus. I need to remember to use that!


  • Hash Fellow

I tried copying and pasting your text and saving as a .PRJ but I get "invalid project file" when I try to load it in A:M. :unsure:

 

I'm not sure what it's missing.


  • Admin
I tried copying and pasting your text and saving as a .PRJ but I get "invalid project file" when I try to load it in A:M. unsure.gif

 

I'm not sure what it's missing.

 

I'm not sure what to tell you there.

If you paste everything from [PROJECT] to [/PROJECT] into a text file and name it with a .prj extension it should be an empty Project file.

Note that this should produce the same empty project file you'd get (with your personal information of course) when you select Project/New from the menu.

Nothing to see or measure there... just the ultra basic starting point.

 

Perhaps you also copied over the forum's [ PROJECT ] and [ /PROJECT ] tags that are used to embed a project's code into a post?


Perhaps something works differently for Chors nowadays because some folks don't use Project files (Like Nancy... using only Chors... albeit they are surely embedded Chors) and yet they still maintain their FPS.

 

I do not use chors with embedded data either. But I'm not sure what you were implying by that statement. In any case:

 

I have set my default frame rate to 24 (tools/options/units).

 

When I start A:M (currently set to do nothing at start up - tools/options/global), I usually start with new/project, (which sets my frame rate to 24) and then import my chor. I only save chors (and of course models, materials, actions all in separate files). I do not save projects (unless forced)

 

If I am importing someone else's project then the frame rate gets set to whatever they set in their project.

 

If I import someone else's chor (into my default 24fps project) then A:M converts the chor key frames to 24fps, even if they meant their chor to be at 30.

 

In the past, I have been known to start a chor with fps = 6 or 12, then set some key frames, then when I am satisfied with the poses, will change to 24 fps, to work in transitioning. I did this to avoid setting extrapolations to hold.

 

Works (worked?) beautifully (I haven't worked that way in a long while)


  • Admin
I do not use chors with embedded data either. But I'm not sure what you were implying by that statement.

 

I'm surely using the wrong terminology by using the word 'embedded' but I refer to the nature of a Chor that allows you to work with/save Chors without saving the Projects.

Perhaps 'referenced' would be more accurate but that doesn't quite fit either. Slightly unrelated... but for those that save Project files... Chors are embedded by default into Projects unlike other assets. This can be seen in the File Info property under the Chor where Embedded is set to On by default.

 

If I import someone else's chor (into my default 24fps project) then A:M converts the chor key frames to 24fps, even if they meant their chor to be at 30.

 

This is telling, and also a matter of interest.

While the overall time should remain constant, the keyframes surely change.

This could be slightly problematic in that everyone sharing the files may not be using 30FPS so we may not always see the same thing.

This then would be another case where Project files are the preferred method of file sharing.

I dimly recall a common FPS being emphasized during TWO to keep keyframes from changing.

The assumption being that because 30FPS was assumed Chor files could be used and Project files themselves jettisoned.


  • Hash Fellow

There was a time when the fps rate was never saved in any file. The fps that A:M used was whatever was set in the Options panel and you needed to deduce what the correct setting was for whatever work you had loaded.

 

Sometime after we started TWO A:M was changed to write the fps setting in the PRJ.

 

I forget how we got everyone to use 24fps for TWO since we were mostly saving chors and not PRJs.


  • Hash Fellow

Thanks, David, that loaded.

 

If I take that PRJ set to 1 fps and make keyframes at 0 seconds, 1 second, and 1.5 seconds (I have to manually enter 00:01:0.5 in the time counter to get that half-frame time) I get this in the file:

 

 

MatchName=X

1 0 0

1 1:0 109.575

1 1:15 -97.8573

 

 

 

So even when the PRJ is set to 1 fps, A:M still writes the keyframe times based on 30ths of a second.


As crazy as it sounds, many of the media players get way off when trying to put together the correct speeds.

 

Sound with audio dialogue often suffers terribly. The conversions between the original render, an editing software, and then the player (Media Player, Quicktime, Vimeo, etc.) all work against the shining thing the artist originally intended.

 

I have watched so many of my animations get trashed in this process.


I forget how we got everyone to use 24fps for TWO since we were mostly saving chors and not PRJs.

 

24 fps was chosen from early on with TWO, most likely through consensus, as it also meant fewer frames had to be rendered (versus 30fps). In TWO we were saving both chors and projects throughout the whole project. I would work locally with chors only, but would upload both the finished chors and the projects (referencing the finished chor) to the svn. 'Twas silly.

 

So we got smart with SO.

 

With SO we were saving/uploading only chors (also 24fps) during the animation phase. But projects were then created only at the end of the SO project, during the rendering phase (mainly by Holmes), as netrendering required it, and there were some things that got lost and are only saved with project data (e.g. rotoscope image references for fog, matcap image references, maybe environment maps used for IBL?, and other things that I can't remember).

