## Tuesday, July 29, 2008

### Aegisub crack + keygen

Damn, it seems people started pirating Aegisub.

Though I wonder where the keys generated by that keygen should be entered... and what would they crack? And I'd love to know how they managed to compress Aegisub that much, it could really help our own distribution!

SPOILER: The site is probably fake, just making up torrents that don't exist, so I'll assume those numbers are fake too. It also just links to a different website that apparently wants you to sign up before you can even search its archive.

## Saturday, July 26, 2008

### The epic of Aegisub-tan

A while ago, jfs and I thought that it would be interesting to have a mascot for Aegisub, similar to the OS-tans from Futaba. She would be a chibi-style mascot, wearing light plate armor and holding a staff, and she would be seen in places like the splash screen, or in "loading" and "Aegisub crashed" dialogs, with different expressions to show her "mood" (e.g. crying when Aegisub crashes).

So we tried to contact several different artists, and many of them seemed to be quite interested in drawing at least a concept for her - but all of them eventually gave up. Is there any hope for an Aegisub-tan?

This is the description that I posted on the Wiki article (a little modified):

• "Chibi" style, 2 to 3 heads tall.
• Long white hair.
• In her left hand, she holds Aegis, the shield that gives the program its name.
• In her right hand, she holds a magic baton (Mahou Shoujo Lyrical Nanoha style) with a cog in it. This represents Automation.
• Below her knees, on her chest, shoulders and hands, she wears red light-armor plating, matching the shield. These pieces should be simple, but curvy and shiny. Above all, they should look very light (as in, the opposite of heavy).
• Under her armor, she wears grey (black?) tights. They cover her body and legs, but not her arms or neck.
• On her head, she wears a black/grey (same colour as the underlying clothes?) beret.

(Keep in mind that the above description is NOT set in stone; it's just how I visualized her.)

I even attempted to draw her myself. This is my pathetic attempt:

At this moment, many people lost hope in an Aegisub-tan. We even considered paying an artist to draw her, but nobody was really interested in donating money to the cause.

In conclusion: does anyone feel like drawing her? What we really need first, I believe, is high-resolution concept art to go on the splash screen, logo, t-shirts, etc. After that, we would need several small vectors of her doing many different things, but it'll be much easier to find artists to draw THOSE after we have a final concept drawn.

## Friday, July 25, 2008

### How VSFilter renders border and shadow

If you've ever used the \be or the new \blur tag, you might wonder: why do they blur only the edges, and not everything?

The answer lies in how VSFilter internally handles fill, border and shadow, and the relationship between them.

The basic component of a subtitle rendering is the fill. The fill is the main shape of the text, ie. what you see if you disable border and shadow. (When I write "text" here it can just as well be a vector drawing made with \p1.)

I'm keeping things simple here; there are some technical details in the actual implementation that I'm skipping over because they aren't relevant to this discussion, even though they greatly affect the actual algorithm used. I might discuss the detailed algorithm later.

When I talk about bitmaps in this post, they are single-channel bitmaps, ie. black/white bitmaps. Colour is applied during the painting step, which is described in detail below.

When a subtitle is to be rendered, VSFilter first creates a bitmap of the text fill. It then sees if the text should have a "wide outline", ie. if a \bord tag is in effect. If there is a wide outline, it allocates an additional bitmap that will contain the widened region.

The widened region is the fill bitmap modified so that it's effectively "emboldened", ie. the outline is expanded outwards, but the entire fill is still kept.

You might notice that the widened region looks a lot like the shadow. This is entirely correct, because it is used for rendering not just the border but also the shadow.

When the line is to be rendered, the fill bitmap is used as-is for fill, the widened region is used as-is for shadow, and the border is generated by subtracting the fill from the widened region, ie. the border is the part of the widened region that is not also in the fill.

Things work a little differently when there is no border. In that case, the fill is painted as-is, but the shadow is also painted using the fill.
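The relationship between the three components can be sketched roughly like this (my own pseudocode with made-up names, not VSFilter's actual implementation; I'm using 0-to-1 coverage values in place of real bitmaps):

```python
def components(fill, widened):
    """fill, widened: per-pixel coverage values (0 = empty, 1 = full).
    Pass widened=None when there is no wide outline (no \\bord)."""
    if widened is not None:
        # border = widened region minus fill
        border = [max(w - f, 0) for f, w in zip(fill, widened)]
        shadow = widened  # the widened region doubles as the shadow
    else:
        border = None
        shadow = fill     # no border: the shadow is painted from the fill
    return fill, border, shadow

fill    = [0, 1, 1, 0]
widened = [1, 1, 1, 1]
f, b, s = components(fill, widened)
# b == [1, 0, 0, 1]: only the part of the widened region outside the fill
```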

I wrote above that the fill and widened region bitmaps are black/white bitmaps, so what about colour? The bitmaps are simply re-coloured with the selected colour during painting, or more correctly, used as alpha masks to paint a frame full of the colour.
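A minimal sketch of that painting step (my own simplification; the real code works on 6-bit alpha and packed RGB):

```python
def paint(frame, mask, colour):
    """Blend a solid colour onto the frame, using the single-channel
    bitmap as an alpha mask. frame: list of (r, g, b) pixels;
    mask: list of floats in 0..1."""
    out = []
    for (r, g, b), a in zip(frame, mask):
        cr, cg, cb = colour
        out.append((r * (1 - a) + cr * a,
                    g * (1 - a) + cg * a,
                    b * (1 - a) + cb * a))
    return out

frame = [(0, 0, 0), (0, 0, 0)]
result = paint(frame, [0.0, 1.0], (255, 0, 0))
# only the second pixel, where the mask is set, becomes red
```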

Okay, so on to the blur effects.

The blur effects are applied to the bitmaps, either to the fill or to the widened region bitmap. If there is a widened region bitmap, it is applied to that one only and the fill is left alone. If there is no widened region (ie. no border) the blur is applied to only the fill bitmap.

This is why only the border blurs when you use \be or \blur along with a border, and why the fill does blur when you use blur on lines with no border: The fill bitmap is rendered on top of the blurred border, even though the border blur extends below the fill.

## Thursday, July 24, 2008

### VSFilter hacks

There are lots of versions of Gabest's VSFilter subtitle renderer around; some people are even still using versions that are several years old and are missing features and bugfixes. There are cases of people still distributing these old versions, sometimes as part of a package with other software such as VirtualDub.

Now it happens that some time in 2006, Gabest seems to have lost interest in working on VSFilter, so nothing has happened from his side, not even bugfixes. Fortunately VSFilter is open source, so we have picked it up and made a fork. We have included both some existing bugfixes and improvements that existed as patches, and also made our own bugfixes and further improvements.

I'm going to talk about some of the improvements we have made throughout this post.

First, I'm going to assume you're familiar with most regular ASS override tags; if not, you can get a refresher in our manual. Second, not all of these improvements are available in the version of VSFilter we ship with the current release (2.1.2) of Aegisub, but they should all be in the next one (which will be 2.1.3 or 2.2.0, depending on circumstances).

Sometimes you need sub-pixel precision when positioning text and drawings. Normally you've only been able to get this by setting the script resolution (PlayResX and PlayResY) to something larger than the video resolution, but the "float \pos" patch changes this. It allows you to use decimal/floating point numbers for positioning lines and gives you up to 1/8th pixel precision.

The image to the left shows four lines of text positioned at different X coordinates. You can see how they move ever so slightly, although not a full pixel.
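To illustrate what 1/8th pixel precision means in practice, here is a sketch of the quantisation (illustrative only; the actual fixed-point handling inside VSFilter differs in detail):

```python
def snap_eighth(x):
    # a decimal \pos coordinate is effectively snapped
    # to the nearest eighth of a pixel
    return round(x * 8) / 8

print(snap_eighth(100.3))   # 100.25
print(snap_eighth(100.07))  # 100.125
```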

The tags \fax and \fay allow you to do shearing operations on your text. This is a bit like rotations, except that it's not. They are especially useful for doing perspective correction when you are typesetting signs rotated in 3D.
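Assuming the usual definition of these shear factors (\fax shifts x in proportion to y, and \fay shifts y in proportion to x), the transform looks roughly like this:

```python
def shear(x, y, fax=0.0, fay=0.0):
    # horizontal skew grows with y, vertical skew grows with x
    return x + fax * y, y + fay * x

# with \fax0.5, a point 10 px down the glyph shifts 5 px sideways:
print(shear(0, 10, fax=0.5))  # (5.0, 10.0)
```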

Shortly before Gabest dropped development of VSFilter, he introduced the "ASS2" format. This isn't very well known, and it only has minor changes over the original ASS format. One of the things was a new karaoke timing tag, \kt. While Aegisub can read ASS2 files it can't write them, so if you edit them in Aegisub you will lose some information. It doesn't have support for timing with the \kt tag either, but you can of course still use that and use all the additional features of ASS2 if you edit the file with a text editor.

So what is \kt? It sets absolute timing: it allows you to move the highlight both back and forth in time without highlighting other syllables. It is probably best illustrated with an example:
```
{\kf10}ABC {\kt20\kf10}DEF {\kt10\kf10}GHI
```

When you render this example, you will first see ABC highlight. Then GHI will follow, because its highlight is set to start at time 10 by \kt. Finally, DEF will highlight because its start time was set to 20 by \kt. You can also use \kt to make karaoke syllables overlap in timing.
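The clock logic behind this can be sketched like this (names and data layout are my invention; durations are in the \k tag's centisecond units):

```python
def syllable_starts(sylls):
    """sylls: list of (kt_value_or_None, k_duration, text) tuples."""
    clock = 0
    out = []
    for kt, dur, text in sylls:
        if kt is not None:
            clock = kt        # \kt resets the karaoke clock absolutely
        out.append((text, clock))
        clock += dur          # \k advances the clock by its duration
    return out

line = [(None, 10, "ABC"), (20, 10, "DEF"), (10, 10, "GHI")]
print(syllable_starts(line))
# [('ABC', 0), ('DEF', 20), ('GHI', 10)] -- GHI highlights before DEF
```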

Originally the \be tag (Blur Edges) only allowed blurring to be turned on or off, and it was very "weak": the blur effect was hard to notice at all. We have updated it so that it now supports variable-strength blur, and you can also use it with \t to animate the blur strength.

Because \be doesn't look very good at high values, we have also introduced the alternate \blur tag, which performs the blur using a two-pass gaussian blur algorithm. This gives a much nicer and wider blur, especially at high values.
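For the curious, "two-pass" means the gaussian blur is separable: one 1D pass over the rows, then one over the columns, giving the same result as a full 2D convolution at far less cost. A naive sketch (not VSFilter's optimised code):

```python
import math

def kernel(sigma, radius):
    # normalised 1D gaussian weights
    k = [math.exp(-(i * i) / (2 * sigma * sigma))
         for i in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]

def blur_1d(row, k):
    r = len(k) // 2
    out = []
    for i in range(len(row)):
        acc = 0.0
        for j, w in enumerate(k):
            idx = i + j - r
            if 0 <= idx < len(row):
                acc += row[idx] * w
        out.append(acc)
    return out

def gaussian_blur(img, sigma):
    k = kernel(sigma, radius=int(3 * sigma) + 1)
    img = [blur_1d(row, k) for row in img]        # horizontal pass
    img = list(map(list, zip(*img)))              # transpose
    img = [blur_1d(row, k) for row in img]        # vertical pass
    return list(map(list, zip(*img)))             # transpose back
```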

I will discuss why \be and \blur are "blur edges" effects and not "blur everything" effects tomorrow.

If you've ever wanted the shadow to be positioned differently than just "X down and X right", the \xshad and \yshad tags are probably just what you want. These work like \shad except that they set only the X or Y displacement. You can even use negative values with them!

Finally there are the \xbord and \ybord tags. They can be useful for various things, but one of the intentions was to better support anamorphic video: they allow you to control the border width in the X and Y directions separately. You can even disable the border in one direction entirely!

You can also combine this with strong \be or \blur and maybe some shadow for other interesting effects.

Again, remember that many of these tags require a very new version of VSFilter. For example, the \blur tag was only added yesterday! The next release of Aegisub will ship with a VSFilter version that supports all of them, so if you want to be safe you can wait until then. If you're impatient, you can follow development on the Aegisub forum.

### Kanamemo: a tool for the apprentice weeaboo

Back in 2006, when I decided to learn Hiragana and Katakana, I looked around for flashcard programs to help me in my task. After finding that none of them actually worked as I thought that they SHOULD, I decided to roll my own. The result is Kanamemo:

It works by teaching you Hiragana and/or Katakana (your choice) by "levels". Each level typically contains 5 different kana. It basically shows you a kana and asks you to enter its Hepburn roomaji transliteration. If you get it right, you get +1 point for it. If you get it wrong, you get -10 for it AND -10 for the one that you confused it with. Once all kana of a given level are at 5 points or more (or so; I don't remember the exact rules), you've learned them and the next level unlocks.

It also never stops flashing old kana to you, but the probability of a given kana being picked is inversely proportional to how good you are at it - that way, it makes sure that you don't forget the ones that you learned earlier, while focusing on the ones that you struggle with.
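My best guess at the general idea, sketched out (this is illustrative only, not Kanamemo's exact formula):

```python
import random

def pick_kana(scores, rng=random):
    """Pick the next kana to flash, weighted inversely by score, so the
    kana you are worst at come up most often. Scores can go negative
    (wrong answers cost 10 points), hence the max() clamp."""
    weights = {k: 1.0 / max(s, 1) for k, s in scores.items()}
    total = sum(weights.values())
    r = rng.random() * total
    for kana, w in weights.items():
        r -= w
        if r <= 0:
            return kana
    return kana  # floating-point fallback

scores = {"a": 1, "ka": 20, "sa": 5}
# "a" (score 1) gets picked far more often than "ka" (score 20)
```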

I found that it works exceptionally well, and I could learn to read all of Hiragana and Katakana in 2 days, but if you're particularly diligent, you can probably do it in a day.

The source code for the program has been available at the Aegisub repository for a while. Here's a link.

I have been meaning to write a similar tool for kanji+words (by mining data from EDICT and KANJIDIC), but my sloth has been preventing me from doing so.

[EDIT] If you want to build it natively on Linux, see this.

## Wednesday, July 23, 2008

### Five cool things in Aegisub that people aren't aware of

One thing that I've noticed while talking to users of Aegisub is that there are a number of features that people just don't realize Aegisub has. Here are five of them:

(Note: these tips apply to version 2.1.2.)

1. Opening a video file as the audio source
If you have a video raw that you're working on, and you're just going to be performing audio timing on it, you don't need to demux the audio from it. Provided that the video is in a format that Aegisub can work with (that is: most files on Windows), you can just go to Audio->Open File and pick the VIDEO file as the audio file to open (you will need to change the open dialog's default filter to show those files). Note that this is different from "Open Audio From Video", that just loads the audio from the currently open video file.

2. Saving to non-ASS formats
Aegisub's primary format is ASS (this is due to a few technical reasons, such as avoiding accidental loss of formatting information), so you can't easily save to other formats. But it's not impossible! If you go to File->Export..., you can tell Aegisub to save in other formats, such as SRT, SSA, or Adobe Encore. It's also possible to force the "Save As" dialog to save directly to those formats, if you tell it to use the right extension (i.e., tell it to save as "foo.srt" and it will save as SRT).

3. Converting framerate with "Export"
The Export dialog has a "Transform Framerate" filter. If you enable it, it can perform a VFR->CFR conversion on your file (useful for hardsubbing to decimated VFR files). However, it can also do CFR->CFR conversion, that is, "ramp" the file. This can be useful if you have e.g. NTSC subtitles and want to speed them up by 4% for a PAL video. In that case, you would set Input to 23.976 and output to 25.

4. Saving screenshots
Often, you might want to share a screenshot of a subtitle with somebody else. Aegisub has tools to help you do just that - right click the video display, and you will have an option to save the current frame as a PNG file, or copy it as an image to the clipboard.

5. Copying to/from clipboard in plain-text
If you want to share some lines with somebody over IRC or an IM program, or you want to copy from another file, it might be useful to know that Aegisub performs copy and pasting of lines as raw plain-text. So if you copy lines from Aegisub, you can paste them in any text medium, and vice-versa. The same is valid for Style lines.
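The "Transform Framerate" conversion in tip 3 boils down to scaling every timestamp by the ratio of the two framerates. A sketch of the arithmetic (not Aegisub's actual code):

```python
def transform_time(t_seconds, fps_in, fps_out):
    # a subtitle timed for fps_in material is rescaled for the same
    # video played back at fps_out
    return t_seconds * fps_in / fps_out

# NTSC-timed subtitles sped up ~4% for a PAL encode:
print(transform_time(100.0, 23.976, 25.0))  # ~95.904 s
```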

### Universal Subtitle Format: a post-mortem?

The Universal Subtitle Format (USF) was an ambitious project. It was an XML-based subtitle format, supposed to replace the old and problematic community standard, Advanced Substation Alpha (ASS). It was chosen as the default subtitle format for the Matroska multimedia container, and was the primary format of Medusa's unfortunate successor, ChronoSub.

It failed to achieve that goal.

This is what its page on CoreForge has to say:

> The format is based on XML for many reasons: flexibility, human readability, portability, Unicode support, hierarchical system and easier management.

While I can certainly understand "flexibility" and "portability", I don't see why you need XML to have a portable format, or Unicode support. But they go ahead and claim human readability and easier management. Is that supposed to be a joke? It sure is human readable - compared to binary formats. But it's still an incredibly verbose format that no sane person would try to edit by hand. And how is it easier to manage? Only if they mean that it's easier to avoid horribly misshapen subtitle files (you know, the kind of file that's always floating around the community and that VSFilter will happily eat). And what's with "hierarchical system"? It is true, but isn't it also completely irrelevant? Subtitles are NOT intrinsically hierarchical - forcing them to be only complicates matters.

But let's have a look at the format itself. This is a simple "Hello World" in USF:
```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- DOCTYPE USFSubtitles SYSTEM "USFV100.dtd" -->
<?xml-stylesheet type="text/xsl" href="USFV100.xsl"?>
<USFSubtitles version="1.0">
  <styles>
    <style name="NarratorSpeaking">
      <fontstyle italic="yes" color="#FFEEEE" outline-color="#FF4444"/>
    </style>
  </styles>
  <subtitles>
    <subtitle start="6.100" duration="4.900">
      <text style="NarratorSpeaking">This is a demo of<br/>The Core Media Player<br/>subtitle format</text>
    </subtitle>
    <subtitle start="00:00:11.000" stop="00:00:15.000">
      <text style="NarratorSpeaking">What can be done ?</text>
    </subtitle>
  </subtitles>
</USFSubtitles>
```
The above sample is the "official" sample included with the specs, stripped down to make a suitable "hello world". For comparison, I've re-created the script in ASS:
```
[Script Info]
ScriptType: v4.00+
PlayResX: 640
PlayResY: 480

[V4+ Styles]
Style: NarratorSpeaking,Arial,20,&H00EEEEFF,&H000000FF,&H004444FF,&H00000000,0,-1,0,0,100,100,0,0,1,2,0,2,10,10,10,0

[Events]
Dialogue: 0,0:00:06.10,0:00:11.00,NarratorSpeaking,,0000,0000,0000,,This is a demo of\NThe Core Media Player\Nsubtitle format
Dialogue: 0,0:00:11.00,0:00:15.00,NarratorSpeaking,,0000,0000,0000,,What can be done ?
```
Note: I removed the "Format:" lines from the above file. This is because, to the best of my knowledge, Sabbu is the only program that actually cares about those lines. Neither VSFilter nor Aegisub care if it's there or not, and both will, in fact, ignore it.

For further comparison, this is what the same script would look like in the current draft of AS5:
```
[AS5]
ScriptType: AS5
Resolution: 640x480

[Styles]
Style: NarratorSpeaking,,\i1\1c#FFEEEE\3c#FF4444

[Events]
Line: 0:00:06.10,0:00:11.00,NarratorSpeaking,,This is a demo of\NThe Core Media Player\Nsubtitle format
Line: 0:00:11.00,0:00:15.00,NarratorSpeaking,,What can be done ?
```
The first thing to notice is that ASS is a much more "compact" format, while USF is more "readable", in the sense that you'll easily know what each thing does even if you aren't familiar with the format - whereas unless you are very familiar with the ASS format, the "Style" line will be incomprehensible. ASS is also more "horizontal" - that is, unless you cram things into the same line in USF, ASS will take fewer lines, but those lines will be longer.

Here's the important point: USF is NOT designed to be written by hand. It's just too much effort to write all of that, and if you forget to close some tag somewhere, you'll break the entire file, which isn't an issue in an ASS-like format. And this is exactly where the problem is: there is no good editor that supports USF!

Lately, there has been a trend for XML-based subtitle formats. This is probably because XML is relatively easy to parse by a machine, and also because of the "buzz" associated with XML. But let's face it: subtitles are not best modeled by XML. The ASS format is a strange hybrid of an INI file, a CSV list, and TeX, and that works astonishingly well - that's why we have decided to base AS5 on the same combination, although that is a subject for another post.

So here's the situation that USF faced: there was no real editor that could deal with it and nobody wants to write or maintain USF files by hand. Because of that, nobody actually uses USF, so there is no renderer that accepts it. Finally, USF offers very few real benefits over ASS, feature-wise. All in all, it just wasn't interesting to support it, and it faded into oblivion.

But here's an idea: Athenasub (the library that will be the backend of Aegisub 3.x series) will be completely format-agnostic. That would make it feasible to make Aegisub fully support USF, even its fanciest features, except that there is no renderer to display it. Should we bother? Is there still any interest in this format? At the moment, I have little interest in attempting to resurrect it (especially since we have our own plans with AS5), but if there is popular demand for it, I might reconsider. Last time I checked, not even the Matroska team seemed to care much for it anymore.

AS5. USF. And let's not forget Gabest's SSF. Do any of those formats have a future in the community? Or will fansubbers cling to ASS for the rest of their days?

## Tuesday, July 22, 2008

### Random code-snippets from VSFilter

I've been reading and hacking on the VSFilter code more than is probably healthy, and over time I have found a lot of funny/strange snippets of code.

For example, this line in GFN.cpp (Get File Name):

```cpp
CString filename = title + _T(".nooneexpectsthespanishinquisition");
```

In VSFilter.cpp you can find this gem:
```cpp
/*removeme*/JajDeGonoszVagyok();
```

Do you know the "opaque box" background style, supported as an alternative to wide outlines? Well, here's how it's created:
```cpp
CStringW str;
str.Format(L"m %d %d l %d %d %d %d %d %d",
    -w, -w,
    m_width+w, -w,
    m_width+w, m_ascent+m_descent+w,
    -w, m_ascent+m_descent+w);
m_pOpaqueBox = new CPolygon(style, str, 0, 0, 0, 1.0/8, 1.0/8, 0);
```

Yup, it creates a drawing object from a string. While it is a bit clever (the alternative would be much more code), it has a bad problem which you may have seen if you've used it yourself: if for any reason multiple boxes need to be created, such as when you have multiple lines, the boxes will overlap, and a non-zero alpha will make that look really bad.

I wonder how long this line has been sitting there:
```cpp
// TODO: handle collisions == 1 (reversed collisions)
```

Maybe pre-buffering could be more useful if this was actually implemented:
```cpp
STDMETHODIMP_(bool) CRenderedTextSubtitle::IsAnimated(POSITION pos)
{
    // TODO
    return(true);
}
```

I still find this the weirdest part... there is a CPP file with a strange name. It contains, among other things, this function:
```cpp
#define LEN1 (countof(str1))
#define LEN11 (countof(str1[0]))
#define LEN2 (countof(str2))
#define LEN3 (countof(str3))

static void dencode()
{
    int i, j;
    for(i = 0; i < LEN1; i++)
        for(j = 0; j < LEN11; j++)
            str1[i][j] ^= 0x12;
    for(i = 0; i < LEN2; i++)
        str2[i] ^= 0x34;
    for(i = 0; i < LEN3; i++)
        str3[i] ^= 0x56;
}
```

If you think it looks like XOR en/decryption you're right. That's exactly what it is.

It's from the file containing the void JajDeGonoszVagyok() function (Hungarian for "oh, how evil I am"), and the file is called "valami.cpp" ("something.cpp"). This file also contains one other strangely named function: bool HmGyanusVagyTeNekem(IPin* pPin) ("hm, you seem suspicious to me").

Both of those functions decrypt some strings written as arrays of numbers. These strings are the names of registry keys of other DirectShow filters. The JajDeGonoszVagyok function then detects the highest merit of all those filters and makes sure that DirectVobSub itself gets a merit higher than any of those... I think this is the DirectShow version of the "law of the jungle".
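The trick works because XOR with a fixed key is its own inverse, which is why the single dencode() function serves as both the "encrypter" and the "decrypter". A toy illustration (my own code, not VSFilter's):

```python
def dencode(data, key=0x12):
    # XORing twice with the same key restores the original bytes
    return bytes(b ^ key for b in data)

obfuscated = dencode(b"SomeFilterRegistryKey")
print(dencode(obfuscated))  # b'SomeFilterRegistryKey'
```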

### So, what happened to the competition?

On July 9, 2006, the last actively maintained ASS-Based General-Purpose Subtitling Software (henceforth ABGPSS) competitor that Aegisub had - Sabbu - was dropped by its creator, kryptolus. Sabbu was an important program in the sense that it was the first ABGPSS to support Unicode and to be cross-platform. With Medusa and SubStation Alpha long dead, Subtitles Workshop being far from usable for anime fansubbing purposes, and SSATool designed for very specific purposes, Aegisub obtained a monopoly on the ABGPSS business.

But what really happened? Was that a good thing? Let's take a quick look at all the related software.

Substation Alpha started it all. Written in Visual Basic, it had many advanced features for its time, and many timers still think that it's the best timing tool ever made (I was recently shocked to learn that some old-school fansubbing groups have been using Aegisub even for timing!). As revolutionary as it was, it was essentially useless for typesetting and had too many quirks for most users.

• What happened to it? It was discontinued many years ago by its creator, Kotus.
• Who still uses it? Many old-school timers still do, apparently, and won't replace it with anything else.
• Why was it important? It supported genlocks, and it also helped ignite the digisub revolution. The current standard subtitle format is a direct descendant of SSA's own version 4 format, which is the source of many oddities in the format.

Medusa is the tool that I actually used when I was a "fansubber" (it's worth pointing out that I was also a fansubber [sans quotes] for a brief while). Not only was Medusa written in Visual Basic, like its predecessor, but it managed to exploit that fact in new and unique ways, making it infamous for its instability and propensity for misbehavior. It was such a marvelous tool that I (and many other typesetters) decided that it was better to simply typeset with good old Notepad+VirtualDub. This technique would later inspire Aegisub's video mode.
• What happened to it? kaiousama, its creator, apparently attempted to rewrite it from scratch into a greater abomination known as "ChronoSub", which would use the dreadful USF format as its primary format. He vanished after that.
• Who still uses it? Masochists. Aegisub was designed to replace Medusa specifically, so there is no real reason to use it, unless you are on Windows 9x.
• Why was it important? It was the first ABGPSS to support the Advanced Substation Alpha (ASS) format, and the first to include a video display for typesetting.

Sabbu was an important step in the right direction. This was the only program still in active development when Aegisub started, and that competition probably helped both programs grow faster - I know that Aegisub did benefit from it! This program made fansubbing in UNIX systems a possibility, and solved many of the problems from the older tools. However, it suffered from an unusual GUI, that many people could not get used to.
• What happened to it? It was discontinued two years ago.
• Who still uses it? Many timers believe that Sabbu's audio timing mode is as good as audio timing can get, and so they stick to it. Because of that, Aegisub 2.x series basically copied Sabbu's timing mode, so now both programs are almost identical on that aspect. (Except that Aegisub supports a few extra tools.)
• Why was it important? It was the first time that an ABGPSS was developed following modern trends and it was, for a while, the only option that UNIX fansubbers had.

So the situation now is that Aegisub has nothing to compete against. I do not deny that this is somewhat frustrating - many people claim that the entire fansubbing community is driven by fierce competition between groups, and the same holds true of its tools.

Sure, Subtitles Workshop does many of the things that Aegisub does - but it does many essential things very poorly, and has horrible support for ASS. Certainly, there are specific tools (many kept "in house" by paranoid fansubbers who actually believe that they have much to gain from that practice) to do many tasks, especially karaoke. Even SSATool is being incorporated into Aegisub ever since its developer joined our staff. But I miss the thrill of having a real, actively-developed tool to compete against.

Since the dawn of time (since before I started Aegisub in June 2005, that is), there have been rumors that a certain fansubber has been working on a certain fansubbing tool whose ultimate goal would be to replace Medusa (even the name implies that). Well, Medusa has, I believe, been replaced. Perhaps there is still hope for some fun game in the back stage of the community?

Maybe it's only natural that such projects would eventually die out - Sabbu was the only open-source one amongst them, and even then, kryptolus was its only developer. I hope that Aegisub survives for as long as subtitles and fansubbers are around, but I have to keep in mind that, statistically speaking, the odds aren't in my favor...

That said, remember that Aegisub is a free project - if you develop tools for the fansubbing community and would like to join our staff, we will always welcome developers who prove themselves capable of helping us. Ultimately, the goal of the Aegisub project is to be THE tool for all subtitling needs in the anime community.

## Sunday, July 20, 2008

### Why rendering \k and \kf effects is fast in VSFilter

This is a repost of something I wrote earlier on the AnimeSuki forums, in relation to a discussion of how much CPU time various kinds of karaoke effects take to render.

This discussion only covers TextSub (VSFilter); I don't know what other renderers do, and their use is still very limited anyway. Also, everything that goes for \k also goes for \kf, \K and \ko: they use the same rendering technique.
This will also explain a funny "artifact" some karaokers might have seen when using \kf with vertical karaoke.

First, while TextSub does have a function that should tell whether a line is animated or not (presumably so it could avoid re-rendering static lines for every frame), that function is empty: it just says "return true;", so every line is always considered animated, no matter what's in it.

Next, the way \k effects are handled is using a "switchpoints" algorithm.
TextSub renders (up to) three different single-channel 6-bit bitmaps for each line: fill, border and shadow. (Border is Shadow minus Fill. Shadow is Fill "expanded" to give an outline.)
When the subtitle is to be painted onto the video, TextSub builds a list of switchpoints for each line component. A switchpoint has two parts: Colour (which includes alpha) and end-coordinate. The end-coordinate is which pixel index on the scanline the colour is valid up till.
(When a line has a vector-\clip, the vector drawing is rendered as a fourth 6-bit image which is used to mask the other layers while painting.)

When there is no \k effect, there is only one switchpoint for each component, which has the colour of it and the end-coordinate set to infinity (actually 0xFFFFFFFF).
When there is a \k effect, the current position of the highlight is calculated for the frame, and a switchpoint is added at the right coordinate. This is very fast to calculate. The pixel size of every syllable is already known (because the rasteriser breaks the line into "words" at every change in formatting - \k tags are formatting) and for \kf effects, getting the position within the syllable is a matter of simple linear interpolation between the endpoints of the syllable.

Now for painting an actual component.
For every scanline of the component, loop over each switchpoint. For each switchpoint, paint its colour to the video frame, using the component as mask and optionally also masking with a vector-\clip mask. When the endpoint of a switchpoint is reached, do the same for the next switchpoint, continuing where the previous one left off.
This is repeated for every scanline of the component. Also very fast.
(The case of just a single switchpoint, ie. no \k effect, is questionably optimised by removing the switchpoints-loop. I think this in practice only saves a few hundred or maybe thousand machine instructions in total for each component, but I haven't checked the actual code.)
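The painting loop for a single scanline can be sketched like this (the data layout and names are my invention; the real code works on 6-bit alpha and is heavily optimised):

```python
INF = 0xFFFFFFFF

def paint_scanline(mask, switchpoints):
    """mask: per-pixel coverage 0..1 for one scanline of a component.
    switchpoints: list of (colour, end_x); a switchpoint's colour applies
    up to (not including) its end-coordinate, then the next one takes over.
    Returns the (colour, alpha) painted at each pixel."""
    out = []
    sp = iter(switchpoints)
    colour, end = next(sp)
    for x, a in enumerate(mask):
        while x >= end:              # advance to the next switchpoint
            colour, end = next(sp)
        out.append((colour, a))
    return out

mask = [1.0, 1.0, 1.0, 1.0]
# no \k effect: a single switchpoint with end-coordinate "infinity"
print(paint_scanline(mask, [("yellow", INF)]))
# a \k highlight boundary at x = 2
print(paint_scanline(mask, [("yellow", 2), ("white", INF)]))
```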

As for why \k effects don't rotate when you use \frz (and family): they are scanline-based, and the switchpoints are assumed to always be at the same coordinate on every scanline. The switchpoints can't change between scanlines for the same component.

This should explain why \k effects are fast to render, unlike many \t-based effects. Using purely \k-based karaoke effects is safe to do when softsubbing, any modern CPU should be able to render it, since it doesn't really take any more CPU than rendering static lines.

### The future of Aegisub

Greetings to all readers, and welcome to our new blog!

I'm Rodrigo Monteiro (a.k.a. amz) and I founded the Aegisub project together with Niels Hansen (jfs). Although I've written a good portion of all the code, lately real life has decided to get in my way and I haven't been contributing much - which is part of the reason why development has been slow.

But, to get to the point, this is what we're planning for the future of Aegisub:

1. We want a stable 2.2.0 release ASAP. Nobody should be using 1.10 anymore.
2. We want proper Linux, *BSD and OS X support. Although those three platforms work to varying degrees, Aegisub still works better in Windows.
3. A major infrastructure review, which will decouple all the subtitle parsing and manipulation into an external library, tentatively named Athenasub.
4. Implement even more features!
5. AS5.

I think that we're very close to point #1, and that depends mostly on jfs finishing the manual. On the UNIX front, we have verm porting the program to accomplish #2, but we still need more C++ developers to work on the actual features that don't work too well there - TheFluff has been trying to fix LAVC support, which is very problematic.

Point #3 is largely my responsibility. Athenasub will be a standalone C++ library that will load, manipulate and write subtitle files in many formats (all that Aegisub supports now, plus new formats, including image-based ones). It will probably also support some form of scripting similar to Avisynth, which could be used to edit individual subtitles from the command line or to process whole batches at once. While the library itself is coming along nicely, integrating it into Aegisub will be extremely difficult, but it will hopefully make the program more stable and easier to understand (source-wise). It will also warrant a major version change, so look forward to that in the 3.x series.

Point #4 includes all those features that we've always wanted but never got around to implementing... gradient and blur visual typesetting tools, a bleed checker, a script analyzer (which will search for potential issues and display them all in a list, with support for Automation plugins), a character counter, and a few others.

Point #5 is probably the farthest in the future. AS5 is a subtitles format that is intended to replace the Advanced Substation Alpha (ASS) format, by adding many critical new features while overall simplifying the format. A draft specification is available here, but beware that it will certainly change much before it sees the light of day.

This is all that I can think of now. Perhaps jfs will have some more to say regarding his plans for the future of the program. Either way, I intend to detail those points more carefully in posts to come, so stay tuned.

### Aegisub development blog now open

This is a bit of an experiment, something we (at the Aegisub team) have talked about a few times: A development blog.
The idea behind this blog is simply to have somewhere to post longer texts about Aegisub development and subtitling/video/technology in general.

I hope it'll become a success, but let's see :)