Tuesday, 25 January 2011

Pass it on!

Ok so I have taken quite a UI-driven approach to setting up my scene to be given to the animators and others in the group. I have set up an instruction node.
I have arranged things into layers so that others can easily toggle the visibility of geometry and the camera plate on and off for testing and working with.

Here are the instructions, placed into the Attribute Editor on the INSTRUCTIONS poly node.


The outliner is arranged well, with intuitive names to groups and objects within the scene.

Most geometry and camera nodes are locked off so that they are not accidentally moved whilst being used, which removes problems with rendering later.




FINALLY
If you've noticed the sickly green material on the geometry, that is the default colour of the backgroundShader within Maya, which will receive shadows and reflections when rendering but will not have primary visibility. These will need to be balanced on a per-object basis later before rendering, but they will work fine whilst testing.

Problem Solver



Ok so I'm all about solving problems and understanding things properly, so I have tackled the problems I was having with the matchmoving by reading more, learning more and Google troubleshooting. The problem was that, even though I have read about this many times in the last 2 weeks, and however much it was drilled into my head, I had simply left out the lens and camera information.

Maybe it was because I was worried due to my lack of knowledge, or maybe I just forgot. Even so, I found this to be the problem. It turns out that my camera was doing a whole bunch of crazy things, jumping from nonsensical point to nonsensical point. Then I faced facts and spent 20 minutes finding out the focal length, lens size and height. I input this data and immediately things started to match up a little better. The camera was also in more or less the same place in the scene as Sanjay was on the day of filming!

Lastly I went back over some of my tracks and created some point relations, which helped place them better within 3D space. Here are some screenshots describing how I further improved the track, and then the final matchmove to be submitted to my group for approval.




Next I will go into how I have set up the scene to pass on to the animators.

Monday, 24 January 2011

Up up up the zigeral lickety split

Ok so last time I left you I was working on the balcony scene, and I have just finished it. Yes, the time is 5am and I have been up all night really getting to grips with this new skill. First I'll show you what I came up with a few days ago, on the 22nd I think it was...



This wasn't too bad, certainly better than anything I could have done before 2011. But being critical, it is really quite shaky at times, and although not shown by the inserted geometry, there is a problem with depth. The depth is great at the point of the fence, but as you go closer or further than this point, an incorrect depth occurs, due in part to the lens size and focal length of the camera.

So I didn't call it a day with this shot. I really wanted to crack it and understand it, not just 'wing it'! So in the next post I will go into how I solved and understood this problem.


Thursday, 20 January 2011

Matching Up camera moves and scenes

Ok so I put the footage I sorted out earlier today into MatchMover, and after using the processes I have developed over the last week with my tests, I eventually had enough track points at strategic places in my scene to successfully solve the camera.

With this solved camera, I imported into Maya and things matched up more or less perfectly. On one wall there was a slight bit of re-modelling; HOWEVER, after consulting my notes from the day of the shoot, the set is actually much closer than what I had! With this scene more or less successfully matchmoved, I will now go in and model all of the extra props needed so I can pass this scene on to our animators Joe and Nick.

Overall I am very happy with how this project is progressing. Our team is working perfectly with each other and everyone seems busy and happy. A couple of notes on what I've seen of my colleagues' work: our animators are very good, and have completed some test walkcycles *seen earlier in a matchmove test* with a downloaded rig similar to the one we will use. Our compositor is hard at work learning a new program from scratch using tutorials! Our modelling department (Ollie) is working at maximum awesomeness and our creature is looking amazing! And finally our rigging department (Sanjay) is working very efficiently, following the practices of another rigger, I forget who! But it's looking really nice and fluid, and the weights look great!

I am pretty happy with my progress. I have solved one matchmove on our footage so far, and as for the footage itself I have made it workable (see the interlaced and progressive problems earlier). I feel very busy, and that's the way I like it. I've made sure that I know my responsibilities for this project, and I'm working and planning accordingly.

De-interlacing

Ok so as I said, I had to de-interlace our 25fps 1080i footage to become 50fps 1080p footage. After some internet research I found a couple of methods and ran through them both, and in the end used the easier one to understand. The first had me use filters and effects within Premiere, which I was unsure about as I did not want the footage to be evaluated differently in different places within the plate.

The second method had me import the footage into a 25fps 1080i project, then output through Adobe Media Encoder as a 50fps 1080p TIFF sequence. This was not a hard process to go through, but it's done wonders for our footage. Below is a slightly zoomed region of first the interlaced plate, then the progressive plate.

Interlaced:

Progressive
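For the technically curious, here is a rough Python/NumPy sketch of what a 'bob' de-interlace like this does conceptually. This is my own illustration of the idea, not what the Adobe encoder literally runs:

```python
import numpy as np

def bob_deinterlace(frame):
    """Split one interlaced frame into two progressive frames.

    The even pixel rows form one field and the odd rows form the
    other; each field is line-doubled back to full height. One 25fps
    interlaced frame therefore becomes two 50fps progressive frames.
    """
    field1 = frame[0::2]  # even rows (the earlier moment in time)
    field2 = frame[1::2]  # odd rows (captured 1/50th of a second later)
    # Line-double each half-height field to restore the full frame.
    return np.repeat(field1, 2, axis=0), np.repeat(field2, 2, axis=0)

# A tiny 4x4 "frame" whose even rows hold 1s and odd rows hold 2s.
frame = np.array([[1, 1, 1, 1],
                  [2, 2, 2, 2],
                  [1, 1, 1, 1],
                  [2, 2, 2, 2]])
f1, f2 = bob_deinterlace(frame)
print(f1.shape, f2.shape)  # (4, 4) (4, 4) - two full frames from one
```

The point is just that the two moments in time hiding in one interlaced frame get pulled apart into two clean frames, which is exactly why the output doubles from 25fps to 50fps.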


So what is Interlaced footage???

Ok so I have read up about interlaced and progressive footage and I now have a solid understanding of what is going on. First let's start with PROGRESSIVE.

Ok so all animators should be familiar with this. 25 frames per second = 1 frame for every 1/25th of a second, right? All tickety-boo. Now say you wanted higher quality, maybe a step up to 50fps? So how many frames do you get now?? Yep, just testing you: 1 frame for every 1/50th of a second. ALL FINE.

Here is an example of some frames, for arguments sake imagine that this is a small snippet from a 50fps sequence:
F1 (50fps)
F2 (50fps)
F3 (50fps)
F4 (50fps)
F5 (50fps)
Hopefully that all makes sense. If it doesn't, maybe film and VFX are not for you....
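To put numbers on the timing above, here is a trivial Python sketch of when each progressive frame appears on screen:

```python
def frame_times(fps, num_frames):
    """Timestamp, in seconds, at which each progressive frame is shown."""
    return [round(i / fps, 4) for i in range(num_frames)]

print(frame_times(25, 3))  # [0.0, 0.04, 0.08] -> a new frame every 1/25th
print(frame_times(50, 3))  # [0.0, 0.02, 0.04] -> a new frame every 1/50th
```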

But what about INTERLACED???

Ok so now say you wanted to broadcast, or stream, 50fps of 1080p? Sorry bub, ain't gonna happen. Way too much data flying down the line. Enter interlaced footage!

Interlaced footage is designed to produce a much clearer representation of motion, giving a fluidity to the picture and removing any flashing or missed frames, such as in sports. It is also designed to double the rate at which you perceive the fps of the footage, so something that's 25fps can look sort of like something that's 50fps: upscaling 25fps to 50fps and being able to broadcast at the same time!!! Sounds good, huh? NOPE.

I have a severe dislike for this format. Remember the image sequence of my head? Now let's see what the frames would look like with interlaced quality. (Remember, we're going from 50fps to 25fps.)

F1 (25fps)
F2 (25fps)
F3 (25fps)
WHAT THE HELL! Why would you ever want this?? Well, it's understandable that I don't like it. As I want to work in the VFX industry, imagine my disgust over blurred, ghosty, line-y, pixely, horrible images...

But the fact is that this old format is still good in some respects, such as broadcast. However, as soon as technology advances to the stage where 1080p 50fps can be sent into your telly box, it's sure to die instantly!
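To make that 'combing' concrete, here is a toy NumPy sketch (my own illustration, not anything a camera literally does internally) that weaves two moments in time into a single interlaced frame:

```python
import numpy as np

def interlace(frame_a, frame_b):
    """Weave two consecutive 50fps frames into one 25fps interlaced frame.

    The even rows are sampled from one moment in time and the odd rows
    from the next, which is exactly why a moving subject shows those
    horrible comb-like lines.
    """
    woven = frame_a.copy()
    woven[1::2] = frame_b[1::2]  # odd rows come from the later frame
    return woven

# Two 4x4 frames of a "moving" subject: all 1s, then all 2s.
a = np.full((4, 4), 1)
b = np.full((4, 4), 2)
woven = interlace(a, b)
print(woven[:, 0])  # [1 2 1 2] - alternating rows from two moments in time
```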

So now to set about converting i to p.

Ohhh that's what that is!!!

Ok so I tried taking that line-y footage into MatchMover knowing that there would be a problem, and surprise surprise, there was. Out of the thousands of auto tracks I applied to the image (with a minimum track length of only 25), only 10 or so stuck. And they were baaaaaad.



I then tried manually tracking one point, and after 15 minutes I had half a track line that looked like crap. The problem was that I couldn't even see properly where to place the track points, the footage was that bad. (See photo below.)




Enough was enough. I typed in the camera name and details of the fault, and spent a long while trying to find anything about this problem. That is, until I found out it wasn't even a problem.....

It turns out what we shot in was 1080i, not 1080p. I was a bit unsure what the difference is between progressive and interlaced, but now I'm sure. Apparently Adobe Premiere and After Effects are set up to de-interlace footage, so I will research a little more and do two posts: one about i & p, and one about how to convert one to the other.

Wednesday, 19 January 2011

Blocking out CG set in Maya

OK so I've now taken my Illustrator floor plan into Maya and used it in conjunction with vert snapping to accurately block out our walls and doors. Due to it being late, and not having as much data as I would like due to our tight filming schedule on the day, there is still much to add in. Tomorrow my plan is to build in the last few recorded areas of the plan, then see if I have enough footage (and subsequent tracking data) to add in the rest. If not, no problem, just back to uni! Here are some pics of what I have so far:




Refining FloorPlan

Ok so I have processed the raw on-set data into one easily manageable floor plan in Illustrator (black). I have then made an output file (red) to use in Maya as a curve-based layout to work with. Here are JPEG versions of my .ai files:

Raw:
Cleaned:

floorPlanning

I have begun planning for the CG set. I say planning, but basically I am just compiling all of the information and measurements I took down on the day of the shoot into one blueprint for use in Maya. I am creating a set plan in Adobe Illustrator, as .ai files can be taken straight into Maya as curves to then use as reference when blocking in geometry. Here are my sketchbook images:




Filming and camera problems!

Ok so we went into uni yesterday and ran through everything we wanted to show a few times, then we ran through and filmed it a couple of times. When we looked back over the footage, however, we noticed that although yes, we were using a camera without the inherent CMOS rolling shutter problem, there were a lot of unwanted lines on the footage and an excessive amount of motion blur. Speaking as the matchmover for this project, I wanted a cleaner result to work with, so we instructed our cameraman Sanjay to steady the camera in such a way that every time we pass a planned VFX shot, he moves as smoothly and as fluidly as possible. This results in some maybe unbelievable calmness in some shots, but a lot better footage to track in others. Our plan is now to cut and edit our 4 or 5 takes together to get the best result possible for both matchmove and story.

I will research the exact problem we experienced with the lines on our footage in a forthcoming post!

Monday, 17 January 2011

A most successful matchmove (results)

So now that you've reviewed my notes on what I've done today, here are my results. In my opinion they are 50% successful. This is because I performed the industry-standard process slightly backwards.

Sometimes you will receive a mystery plate and have to make a scene for it, but a more practical way of doing this is to measure up the whole scene and model accordingly; the second step is then to solve a camera for this geometrically perfect scene.

If both steps are performed correctly, then everything should line up.

dynamics inserted into scene



Walkcycle provided by Nick Georgeou



Mashup

A most successful matchmove. (workings)



Ok so today I went around gathering footage to work with and set to work on matching the footage. Above is the plate I used for my tests. Below are my notes taken today:


I have been playing with getting a single channel out of After Effects and converting to greyscale, whilst improving the quality and removing the dirt from the plate.

I have now taken the plate into MatchMover and applied an automatic track to see if there are any strong tracks. Due to the nature and length of the plate, I think a strong track would be around 8 seconds.

I have now gone through and edited certain hard tracks so that they are stronger and therefore will solve the camera much better.

After custom tracking the origin point and a second custom track for the distance, I solved the coordinate system and re-solved the camera. Then exported...

The results this time seem quite satisfactory. There is still much room for improvement, as the depth of the camera is still off by some, however I have constructed a workable scene to animate in for the kitchen shot.

Something I plan to utilise next time is some custom tracks on specific geometry, to help me get the positions of corners and edges.

Whilst working with object placements in Maya, I was having many positioning problems. I wasn't able to guess the sizes, rotations and placements, and then it hit me: all of this data comes from MatchMover. I picked a point, lined it up with the object's transform pivot and snapped it to the correct locator, and bingo. No problems.

To test this scene I have inserted and balanced an emitter to shoot out blobby surfaces. Connecting a collider to the scene means that the particles will bounce about, that plus a bit of gravity. This render is meant to show my progress in understanding, not show off a perfect matchmove. I'm sure that some objects will be out of place, however in comparison to my earlier tests this should be a lot more accurate.




Sunday, 16 January 2011

Editing footage before working with it!

Ok so a lot of what I have been immersing myself in has talked about editing your footage before you take it into a piece of matchmoving software. I have done this twice without letting you know.

The benefit of doing this is that you are sometimes able to drag out a better track just from using one channel, or converting to a greyscale (or single-channel) sequence. So far this has worked OK, however I have noticed something interesting:

By converting to a greyscale image, you create a sharp contrast for the automatic tracks. Most auto tracks will not last past 2 seconds, however you will now get one or two that run the full length of the footage or plate. This can be interesting when used with information taken from another plate.

Of course, all this can then be discarded and you can revert back to your original colour footage for further editing when compositing with your 3D sequence.
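For reference, here is a tiny NumPy sketch of the two conversions I mean. The Rec. 601 luma weights are my choice for illustration, not necessarily what After Effects uses internally:

```python
import numpy as np

def to_greyscale(rgb):
    """Collapse an RGB image to one channel with Rec. 601 luma weights."""
    return rgb @ np.array([0.299, 0.587, 0.114])

def single_channel(rgb, channel=0):
    """Or just pull one channel (default red) straight out of the plate."""
    return rgb[..., channel]

# A 1x2 "image": one pure red pixel next to one pure white pixel.
img = np.array([[[1.0, 0.0, 0.0], [1.0, 1.0, 1.0]]])
print(to_greyscale(img))     # red pixel -> 0.299, white pixel -> 1.0
print(single_channel(img))   # the red channel is 1.0 for both pixels
```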

CMOS = The problem with rolling shutters

Ok, so everything I have read and watched so far about matchmoving has said to watch out for rolling-shutter cameras. First let me tell you what I have learned about rolling shutters.

A rudimentary explanation of what is going on: with the old analogue method of photography and filming, a single frame is exposed all at once in the immediacy of the shot (however long the shutter speed); the image is 'imprinted' onto the film at the same rate all over.

In a CMOS digital camera, when the shutter opens, light passes onto a chip and the chip then has to write this information out in sequence (imagine holding down the A key in MS Word and watching the page fill up). Each row takes a fractional amount of time to process, and therefore there is an offset of time within the photo. This is fine for a single image, unnoticeable even. However, at the bottom of this post I will include my own diagrams, then 2 videos, to show the problem with this effect.

I have drawn a rudimentary picture to explain this effect. Bear in mind this happens on a per-pixel-row basis, nowhere near as coarse as this diagram.

ORIGINAL ANALOGUE FILM CAMERA
1st pixel row (1.00 seconds)
2nd pixel row (1.00 seconds)
3rd pixel row (1.00 seconds)
4th pixel row (1.00 seconds)
5th pixel row (1.00 seconds) etc....

CMOS DIGITAL CAMERA ROLLING SHUTTER EFFECT

1st pixel row (1.00 seconds)
2nd pixel row (1.01 seconds)
3rd pixel row (1.02 seconds)
4th pixel row (1.03 seconds)
5th pixel row (1.04 seconds) etc....

So as you can see, there is a huge problem when it comes to filming. The clips below are taken from a forum and are not mine, though they explain the problem with this effect perfectly:






If you own these videos please let me know and I will happily delete them, or credit you appropriately; however, I'm sure you'll understand I am taking no credit, simply helping make people aware of a problem.
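The timing difference between my two diagrams can also be written out as a tiny Python sketch. Times are in milliseconds, and the 10ms-per-row figure is just my diagram's made-up number (real sensors read out far faster):

```python
def row_capture_times_ms(start_ms, num_rows, readout_per_row_ms):
    """When each pixel row is captured.

    A global (film-like) shutter has a per-row readout of 0, so every
    row shares one timestamp. A rolling shutter reads each row a bit
    later than the one above, so a fast mover lands in a different
    place on every row, which is what skews the image.
    """
    return [start_ms + i * readout_per_row_ms for i in range(num_rows)]

# Global shutter: all five rows captured at the same instant.
print(row_capture_times_ms(1000, 5, 0))   # [1000, 1000, 1000, 1000, 1000]

# Rolling shutter, one row every 10 ms, matching my diagram above.
print(row_capture_times_ms(1000, 5, 10))  # [1000, 1010, 1020, 1030, 1040]
```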

adding a dynamic outlook

Ok so it's all very well making a nice poly cube and placing it in the scene, but what about animation?

See, the biggest challenge of matchmoving I've seen so far is accurately re-creating a real scene from the data you get. I've learned a little more about hard tracks, so my MM is getting better; however, I am still getting a slightly less accurate depth than the real-life scene. This video has some success in what I was trying to accomplish, getting animation in there to explain depth, however like I say I haven't mastered re-creation yet.

Here are my results this time:

Shorter shots = practice perfection



Ok so I've tried to manually matchmove a couple of 10-15 second shots so far, and the biggest problem I've been having is that there is way too much info flooding through the software at once for me to make sense of.

So I took the logical route and tried this 2-3 second shot. The result is that I understand a lot better the process of solving the camera and balancing the coordinate system.

So here is the result I got from the short shot test. I also rendered and comped it a little, so enjoy the shadow!

Thursday, 13 January 2011

My notes on Digital Tutors introduction to matchmover

2D trackers are made of 2 parts: a pattern area and a search area. Trackers on sequential frames take the pattern area in frame 1 and search for it in frame 2 within the search area.
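As I understand it, that pattern/search idea boils down to template matching. Here is a toy NumPy sketch (my own illustration, not MatchMover's actual algorithm) that finds frame 1's pattern inside frame 2's search area by minimising the sum of squared differences:

```python
import numpy as np

def track_pattern(pattern, search_area):
    """Find where `pattern` best matches inside `search_area`.

    Slides the pattern over every position in the search area and
    scores each with the sum of squared differences (SSD); the
    lowest score wins. Returns (row, col) of the best match.
    """
    ph, pw = pattern.shape
    sh, sw = search_area.shape
    best, best_pos = None, None
    for r in range(sh - ph + 1):
        for c in range(sw - pw + 1):
            window = search_area[r:r + ph, c:c + pw]
            score = np.sum((window - pattern) ** 2)
            if best is None or score < best:
                best, best_pos = score, (r, c)
    return best_pos

# Frame 2's search area: mostly zeros with a bright 2x2 feature at (2, 3).
search = np.zeros((6, 6))
search[2:4, 3:5] = 9.0
pattern = np.full((2, 2), 9.0)  # the feature as it looked in frame 1
print(track_pattern(pattern, search))  # (2, 3)
```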

Manual tracking is the best, as automatic can create a lot of data.

Import a sequence

Define camera fps and focal length type if known.

You can either loop or ping pong.

Have a look at rolling shutter problems.

Experiment with automatic tracker default settings.

Exports to many different apps. Maya will work for us.

Export all tracks that match.

MatchMover interface:
Alt is the zooming button.
Green lines mean that MatchMover is happy with the tracking path. Yellow means that it is unsure. Red means that MatchMover thinks it's a bad track.

White cross is 2D space, yellow is 3D space.
Grey tracks are unused tracks. Red tracks have bad areas.

Right click the 2D/3D box for camera bookmarks.

C to lock on camera.

MatchMover DOES give out bad results if you let it.

Controlling tracks through contours

Tests should be 2 seconds or less.

HD is better than SD, obviously.

Use contours to draw out masks where you do not want automatic tracking. Simply moving one will set keyframes on different frames. It is a good idea to do this for elements such as skies or moving objects.

You can manually track an object by setting a point on it and tracking backwards, forwards, or both.

A circular outline means a manual or hard track. Manual tracks get a higher priority in solving than auto tracks.

A combo of automatic and manual can be really quite efficient!

Adding coordinate system

Maya works off of cm.

Set the origin of the coordinate system as something near the floor.

Measure on set and set up the coordinate system.

Normal IS WHAT I THOUGHT.

Through 2 points works well a lot of the time.

It's best to choose green points rather than yellow or red. The most data is the best.

If the Y is upside down, just switch the direction of the 'through 2 points'.

So yeah, always do coords before exporting.

After adding a manual track to help an auto, hit solve for camera.

To change an auto track to a manual one, check 'hard track' in its attributes.

There are always going to be problems with tracks. For these you can use the tracker cleanup assistant.

To clean 3D tracks:
MatchMover needs AT LEAST 8 trackers, and long-lifespan trackers are best.

Clean up with a couple of options: length, residual, track points per frame, and keep hard tracks.

So to use this: use auto mapping without the camera, insert or choose hard trackers, clean up, then solve.

Using multiple keyframes before tracking forward or backward.

To add in 3D objects, just select one of the cubes/planes/etc.

To render out a preview, go to the 3D scene and select render setup. Save the file and set the settings. If there's an error, select render from the 3D menu.

Use QuickTime to check for drift before you go into Maya.

Wednesday, 12 January 2011

I am Jack's complete lack of understanding

Ok so yesterday I tried a couple of matchmoves, the first with the same footage I took as last time. This had very poor results. It turns out it is very easy to track a couple of points through 3D space via 2D video if there isn't going to be any complex geometry/animation in there. It's a whole other thing to want to do it PROPERLY.

The second attempt I tried yesterday was following a new set of tutorials from Digital Tutors. Unfortunately I tried to jump the gun and get working right along with them, which meant I completely neglected a very important aspect of taking the footage, which I then came to regret heavily later. These are my results. Horrible images of chaos.

At this point I am completely aware of my complete and utter ignorance of this software and process. I still only have an outsider's view. However, tomorrow my NEW BOOK arrives, which is completely on matchmoving, and from then on I shall be improving daily (a promise to myself)!!!


Monday, 10 January 2011

rotoscoping; motion tracking the old-fashioned way

It just occurred to me that I did some of this without knowing it during first year. We were set the task of creating a walkcycle in Maya, and to make things a little more interesting I modelled and rigged my character to walk around in the hallway of my house.

To do this I took the video, then used After Effects to generate an image sequence. I plugged this image sequence into the camera's image plane and then keyed the camera's translates and rotates so that it lined up with my scene. Using this as a basis I modelled a wall and ground plane to reference whilst animating, then rendered and composited back in After Effects. Here is what I got, bear in mind I was a first year, yeah? ;)

Let's get cracking!

Ok so I know term hasn't started yet, but I wanted to get going with this matchmoving stuff as soon as possible to help move things along for me and my group. So I viewed a set of tutorials on YouTube about motion tracking and matchmoving, and I thought I would start by defining the difference between the two.

Motion tracking:

This means selecting an area of pixels on a 2D video or image sequence within a program such as After Effects, and telling the computer to trace their motion through time. This can be quite effective for things such as video clean-up, removing unwanted sections of video for use in production, or, to use another example given in the tutorials I was looking at, blurring out people's faces. Below is an example of how I used After Effects' motion tracker to apply an area of pixelation to my face as a sort of anonymity or protection function.



MatchMoving:

This is when you use software such as Autodesk's MatchMover to generate a 3D space from the motion of pixels across a piece of 2D footage. By setting up tracking points methodically you can simulate some accurate 3D depth, and then take this information into a program such as Maya to add 3D objects. Using the camera which is also generated, you can then render and composite these directly into the 2D video with convincing results. Here is a quick example I made after following my first tutorial on matchmoving. It is pretty poor, but I am pleased for a first attempt to be successful at all!



I have enjoyed these two tasks I set myself to give me a better understanding, and I plan to progress with my understanding over the next few days. I have ordered a book published in 2010 on matchmoving, which should give me some better insight than these free tutorials.