Creating Highlights and POTG Videos With Free Tools
Hey folks! I know a lot of us like to share our best plays, funny moments, Play of the Game, etc.
However, many of us don't actually have much information on how to edit our raw video down to something that we can easily share on YouTube or upload to Gfycat. Or we don't have access to the tools that make it easy; many of the best ones are very expensive. Some are less expensive but still come with a price tag.
So, if you're able, why not do it for free?
With that in mind, I thought I'd share my particular workflow for taking a gameplay recording and turning it into a video for YouTube, or making a short AVI for conversion into a GIF via gfycat or another tool, using only free software- free as in beer, no strings attached, no malware, etc. FAIR WARNING:
This may be a bit technical. If you find it's too confusing, ask and I'll try to answer as best I can. In the end, there are simpler ways to do this, but they usually come with a (often very hefty) price tag. All the tools I talk about here are free- no malware/adware, and no need to torrent a seven hundred dollar piece of copyrighted software. If you already own the commercial-grade software for doing this kind of thing, good deal. However, I think some folks would love to know how to do this both legit and free.
First, The Raw Video
I won't spend much time on getting the recording itself. I'm writing this for people who know how to record their game play but want to be able to do one or more of the following:
- Convert formats (from MP4 to AVI, for example)
- Edit for length (taking just the Play Of The Game clip from a longer highlight)
- Add basic effects like fade-ins and fade-outs, etc.
- Do basic editing like cuts, splices and so on.
- Optionally mix in their own soundtrack/voiceover.
So before you can take advantage of this tutorial, you'll need to have solved the "how do I record my gameplay" question yourself. Personally, I use NVIDIA ShadowPlay, but if you do not have an NVIDIA card, you may have to use other alternatives, like FRAPS, Open Broadcaster, etc. I invite those with insight to give their advice in the comments.
The Implements of Destruction
You'll need two main tools- VirtualDub and AVISynth- plus a couple of additional pieces of software: a plugin for AVISynth called FFMS2 and a codec for VirtualDub called Xvid. All are free- as in beer, no weird toolbars or malware.
Here's the links: get VirtualDub 1.10.4, AVISynth 2.6.0, and the latest version of Xvid. You may also want to look at AVISynth+, a fork of AVISynth with more features and better plugin support- but I haven't used it much myself, as 2.6.0 works for my purposes.
You'll also need FFMS2, which adds the ability to read MP4 files to AVISynth. The GitHub link above will let you download the latest release (as of this writing) as a 7-Zip archive. I suggest you use the 32-bit versions of all of these- 64-bit support isn't guaranteed across all these tools, and 32-bit will work just as well.
Download and extract VirtualDub (it doesn't need installing- just run the VirtualDub executable from the extracted folder), then install AVISynth and Xvid. Finally, download the FFMS2 package, open it, and extract the contents of the "x86" (32-bit) or "x64" (64-bit) folder in the archive to your AVISynth plugins folder- if you're using 32-bit, probably C:\Program Files (x86)\AVISynth\Plugins. Again, I suggest you use the 32-bit versions of everything.
What is this Stuff?
VirtualDub is a video processing tool. It's bare-bones, functional as hell, and does exactly what it's supposed to. However, it's not exactly a "pick up and run with it" kind of tool. It's not an editor like Premiere Elements or After Effects, but a processing tool used to run a video through a pipeline. It can change size/aspect ratio, add or remove basic effects (applied to the entire video, like a grain overlay- not fancy text effects), and push the video through a codec that compresses it to an AVI file.
AVISynth is a "frame server". VirtualDub can read raw video files from your hard drive, but it can also read video data from a frameserver, which can provide the video data after doing its own processing on it. This will become more clear in the examples I'll provide, but for now, just note that AVISynth has no user interface. After installing AVISynth you can create text files that act as "scripts", which when opened as a video in VirtualDub will let VirtualDub read the video from AVISynth.
Uh, How About You Just Show Me An Example?
Yeah, that's probably for the best.
OK, so let's say you've been playing Overwatch, you have NVIDIA ShadowPlay enabled and you just pulled off an incredible POTG. You hit ALT-F10 to save your gameplay. What you get is a huge MP4 file, which is stored somewhere on your hard drive, like so: http://i.imgur.com/jZqvcy3.png
This file will contain the last 3, 5, 10 minutes, or whatever length you've set in your ShadowPlay options as how much to capture. Let's assume you want to take just your highlight out and create a video from that.
Well, first, I suggest renaming the file to something shorter (you don't have to, but it's easier to remember). As an example, I had a really awesome play as Zarya the other day, so I just named the file "owzpotg.mp4". Short and sweet. However, it's still 3 minutes long, and I only want about 30 seconds of it. Enter AVISynth and VirtualDub.
Basic AVISynth Script
With AVISynth, you're not editing anything visually- you're writing a script that tells the AVISynth engine what to do with which frames of the video. To start, make a text file in the same folder as your captured video and name it "overwatch.avs" (make sure you actually change the .txt extension to .avs, or it won't work), then put this line in:
FFmpegSource2("owzpotg.mp4", atrack=-1, fpsnum=60000, fpsden=1000)
This tells AVISynth to load your MP4 file, use the audio from the video, and set the frame rate to 60 FPS. The frame rate is expressed as a fraction- fpsnum divided by fpsden- so that non-integer rates can be represented exactly as a ratio of two integers; 60000/1000 is just 60. If you want 30 FPS, set them to 30000 and 1000 respectively.
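As a concrete example of why the ratio matters: NTSC video runs at 29.97 FPS, a rate that can't be written exactly as a decimal but is exactly 30000/1001. A hypothetical NTSC-rate capture (filename kept from the example above, values illustrative) would be loaded like this:

```
FFmpegSource2("owzpotg.mp4", atrack=-1, fpsnum=30000, fpsden=1001)
```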
Save this file and open VirtualDub. Select File | Open Video File... and open the AVS file you just created. If you get an error, something's wrong with your script or possibly your plugins- but if all goes well, you should see something like this: http://i.imgur.com/zcOvrEw.png
This is the VirtualDub window- on the left is the "input" video (your raw MP4), and on the right is the "output" video (the video after VirtualDub's done with it).
The first thing to do is tell VirtualDub what codec to use in processing your video. To do that select Video | Compression... and you should see something like this: http://i.imgur.com/BelrrEw.png
Select the "Xvid MPEG-4 Codec" from the list. If you have other codecs, you may want to try them- there's a lot of good ones, including DIVX, The Combined Community Codec Pack (cccp-project.net), and others. Personally I like Xvid, because it's easy, fast and seems to be well-supported for playback on most platforms, being a fork of DIVX.
Anyway, click OK. Now, at this point, you could click File | Save as AVI..., pick a file name, and you'd have converted your raw footage into AVI format- it might be a smaller file size, but what we want is to capture just your highlight.
There are two ways to do this.
Method 1: Use VirtualDub directly.
This is the easy-peasy method. Pros: Easy, fast. Cons: Smash cut into and out of video (no fade-in or other effects), no ability to string together multiple clips or do other effects.
To pick the range you want, grab the scrubber (the little gray box on the lower left) and move it until you've found the spot you want to start your highlight. To play the video, press the second play button (plays the processed video). The first plays the unprocessed video and with MP4 can be kind of jerky. When you get to the place you want to start the highlight clip, press the "Mark In" button (looks like a left-pointing half-arrow).
Here's a screenshot to show you what I mean: http://i.imgur.com/mEIANzd.png
Now drag the scrub bar to the end of your highlight, then click the "Mark Out" button. It should look something like this: http://i.imgur.com/LTcKMhs.png
Now if you click File | Save as AVI... and pick a name, you'll get an AVI file of just that range, compressed with Xvid and ready for upload to YouTube, Gfycat or whatever else you want to do with it. Simple and basic, but it has some disadvantages- you can't do effects like a fade-in or fade-out, and you can't pick multiple ranges or stitch clips together.
Method 2: Use AVISynth Commands
OK, so to use this method, you'll need to do some more work with that AVISynth script we made earlier. First, open the script in VirtualDub and find the start and end of the clip you want. Just scrub the playhead to the point or use the play button (the one on the right) to locate where you want to start. Note the frame number- it's to the right of the buttons, and will say something like "Frame 10460 (0:02:54.333)[K]". You just need the frame number- note it down.
Now move the playhead to the end of your clip- where you want your highlight to end. Note the frame number there also.
Armed with the frame numbers, go back to your AVISynth script and change it to this:
RawClip=FFmpegSource2("owzpotg.mp4", atrack=-1, fpsnum=60000, fpsden=1000)
TrimmedClip=Trim(RawClip, 8500, 10600)
FadeIO(TrimmedClip, 60)
Note that the first line is pretty much the same, except we've added "RawClip=" to the start- instead of just outputting the raw video, we're going to do more with it. Specifically, the raw clip is trimmed using the Trim() function in AVISynth, keeping frames 8500 through 10600. That line tells AVISynth "trim off the frames before 8500 and after 10600, and call the result TrimmedClip".
Next we take the trimmed clip and pass it through a function called FadeIO()- "fade in and fade out". It says "take the trimmed clip, fade in over the first 60 frames (remember, we're at 60 FPS, so that's one second of fade-in) and fade out over the last 60 frames".
Save your file and reopen it in VirtualDub. You can either click File | Reopen Video File or close it and reopen it directly. Either way works.
Now when you play the video in VirtualDub you should only see the trimmed video and you can see the fade in and fade out.
Make sure you've selected the Xvid codec in Video | Compression... (or your other preferred codec) and then you can save the video as an AVI.
I used this technique to make this video: https://www.youtube.com/watch?v=yiULQ77ozWs
So, you can now take your AVI file and upload it to YouTube- or you can send your POTG video to GFYCat and get a nice gif out of it. You can also spend some time looking at the AVISynth wiki and learn some of the other commands, so you can do things like splice together a highlight reel of all your favorite plays- and use VirtualDub to change the audio track.
Or create a highlight reel of all your favorite fails and use "The Spanish Flea" as the soundtrack.
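As a sketch of what a multi-clip script could look like- the frame ranges here are made up for illustration, and "++" is AVISynth's aligned-splice operator, which joins clips back to back while keeping audio in sync:

```
RawClip = FFmpegSource2("owzpotg.mp4", atrack=-1, fpsnum=60000, fpsden=1000)
# Two hypothetical highlight ranges from the same capture
Play1 = Trim(RawClip, 1200, 2400)
Play2 = Trim(RawClip, 8500, 10600)
# Fade each clip in and out, then splice them together into one reel
return FadeIO(Play1, 60) ++ FadeIO(Play2, 60)
```

Open the script in VirtualDub as before, check the result, and save as AVI with your chosen codec.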
The best part about this technique is that it's free, and once you get past some of the initial hurdles, it's a pretty quick and easy workflow. The downside is that it's not point-and-click friendly and can be a bit daunting to anyone who isn't at least a little familiar with basic scripting.
However, I hope it's helpful to the Overwatch community and helps produce more and better videos and highlights!
I thought this post might be some good inspiration for people interested in the Raspberry PI and not sure what first projects to start on.
I’m in Australia and ordered the RPi3 from RS Components, which took well over a week to arrive- but I still felt lucky to get one so quickly after the public launch! I'm a long-time Linux user but never got interested enough to tinker with the early models- it was retro gaming and the improved availability of the RPi2 that got my attention. The performance boost of the RPi3 and the onboard WiFi were what sold me on jumping in and ordering a full kit, including the official red/white case and power supply.
While I was comfortable writing a disk image to an SD card, the RPi3 release version of NOOBS was even easier- just drag and drop onto a newly formatted card. Simple things like making this process so easy will help get more people involved, without necessarily having to download another application and add a step to get the OS loaded. The official page and video describe the process really well! https://www.raspberrypi.org/help/noobs-setup/
First boot was simple- I was dropped into a working desktop 5-10 minutes later, up and running. Very impressive! The latest release even includes a graphical desktop version of raspi-config, which is exactly where you would expect it to be, under the Preferences menu. This isn't mentioned much yet, because most tutorials are still general ones focussed on the RPi2- but they're still perfectly valid.
If you are new to Linux, there are also some great commands to get you started learning on the official website. https://www.raspberrypi.org/documentation/usage/terminal/README.md
My first issue was getting my WiFi connected, as I couldn't see my networks in the desktop connection and scanning widget. I run a high-end Netgear D7800 ADSL modem/router with 2.4GHz and 5GHz B/G/N/AC WiFi networks, but of course the RaspPi only supports 2.4GHz. Looking at the RaspPi forums, this was probably due to my high channel selection (13), which the WiFi regional settings didn't allow. I simply changed my 2.4GHz network on the modem/router itself to channel 1 and immediately saw my network and connected without hassle.
I also read about the WiFi power saving issue that results in lots of connection problems- especially when connecting remotely. This is fairly easy to solve by adding a line to /etc/network/interfaces in the wlan0 section:
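The line usually suggested for this on the RaspPi forums disables power management on the wireless interface- something like the following, placed under the wlan0 stanza (treat this as a sketch and check your own /etc/network/interfaces layout):

```
# in the wlan0 section of /etc/network/interfaces:
# disable WiFi power management to avoid dropped connections
wireless-power off
```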
I'm using a spare 32" LCD TV over HDMI, and audio didn't seem to be working when accessing YouTube. Another search indicated that because the screen runs at 1366x768, I needed to enable the following option in /boot/config.txt by uncommenting it (removing the # at the start of the line):
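For reference, the option in question is most likely hdmi_drive- 1366x768 is a DMT (computer monitor) mode, which defaults to DVI signalling with no audio. Uncommented, the line in /boot/config.txt looks like:

```
# force HDMI mode (with audio) rather than DVI
hdmi_drive=2
```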
I rebooted and that resolved all my basic issues, and I was able to mess around with some initial project ideas.
SSH and status monitoring
Secure Shell (SSH) allows remote command line access which lets you do anything with the RaspPi. https://www.raspberrypi.org/documentation/remote-access/ssh/
It is not essential to assign a fixed static IP address to your RaspPi (though it's a good idea, so you always know what address to use without checking), and a simple command will tell you your current IP address.
$ hostname -I
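If you do decide to set a static address, one common approach on recent Raspbian (which uses dhcpcd for network configuration) is to append something like this to /etc/dhcpcd.conf- the interface name and addresses here are illustrative and should be adjusted to your own network:

```
# static address for the wired interface (example values)
interface eth0
static ip_address=192.168.1.100/24
static routers=192.168.1.1
static domain_name_servers=192.168.1.1
```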
I installed the free RaspPi Check application on my Android phone and simply set up the IP address, username and password. https://github.com/eidottermihi/rpicheck/blob/master/README.md https://play.google.com/store/apps/details?id=de.eidottermihi.raspicheck
Also, installing an SSH application like JuiceSSH will give you command line access from your mobile device (while on your home or local network) to issue commands- which can even be used to reboot or shut down if things go wrong.
Dropbox Folder Sync
There is no native Dropbox client for the RaspPi yet because of the ARM processor architecture. However, you can get things going with Python scripts!
My method has been to use the Dropbox Developer SDK, which involves setting up an "App" inside the Dropbox Dev Webportal and assigning the access key. https://www.dropbox.com/developers/documentation/python#tutorial
Install the dropbox SDK
$ sudo pip install dropbox
...then you can run a couple of test commands inside the Python interpreter:
$ python
>>> import dropbox
>>> dbx = dropbox.Dropbox('YOUR_ACCESS_TOKEN')
>>> dbx.users_get_current_account()
You should get a bunch of information including your registered email address in response to the above command, which confirms you have the dropbox SDK up and running.
Then you can almost immediately start using the updown.py script from the examples. You can edit the file and put your access token into the script itself, or just run it with a bunch of command line parameters. https://github.com/dropbox/dropbox-sdk-python/blob/master/example/updown.py
Note that updown.py is mostly a Pi-to-Dropbox sync- it doesn't currently sync back down from Dropbox. But if you're handy with code or up for the challenge, it's a great starting point. I've made some edits myself to one of the default options, because I now run it with the -d option to make it automatic rather than interactive with yes/no questions. While the first upload worked fine, if I updated a file on the RaspPi itself, the script didn't automatically upload the new version. A change to one line of code (line 102, the word False to True) fixed that, and now it works as my backup folder script.
Then you can also automate it by putting an entry in your crontab file (the UNIX method for scheduled job control) to sync the files every hour (or more often) to the cloud.
Replace "pi" with your local userid (if different) that will run the script (and owns the files or appropriate permissions) and note that I have set all my default parameters and directories in the script file itself, so only need to call it with the "-d" parameter to make it non-interactive.
$ echo "pi" | sudo tee -a /etc/cron.allow
$ crontab -e
0 * * * * python /home/pi/updown.py -d
Bittorrent Server
Installing a basic Bittorrent server is really straightforward. Just install the main transmission-daemon package
$ sudo apt-get install transmission-daemon
Then edit the configuration file /etc/transmission-daemon/settings.json to add your local network to the RPC Whitelist to allow access over your network. In my case, I use the 192.168.1.x network but added the whole 192.168 range anyway straight after the default localhost 127.0.0.1 entry.
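For reference, the relevant entries in settings.json end up looking something like this (exact values to taste- the wildcards cover the whole 192.168 range):

```
"rpc-whitelist": "127.0.0.1,192.168.*.*",
"rpc-whitelist-enabled": true,
```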
Reload the transmission service to pick up the changes.
$ sudo service transmission-daemon reload
Then access it on the Pi or other device web browser at the device IP address like 192.168.1.100:9091 and add torrent links straight from the browser.
While this setup is fine for light torrenting (and likely small SD cards), heavier torrenting needs a few more things, like an external hard drive. https://chirale.wordpress.com/2013/01/02/raspberry-pi-as-network-torrent-downloader-with-transmission/
My current issue is that the network interface is not fully up before transmission starts when the Pi boots, so I have to manually restart the service after boot-up. There doesn't seem to be an immediate resolution yet- it appears to be a core release issue with buggy network state management.
WiFi Printer and Scanner Server
I have a cheap USB printer/scanner that works fine in Linux as a local printer, but I was interested to see how well a RaspPi could be used to network-enable a cheap printer. Turns out it works really well and is also pretty straightforward. https://help.ubuntu.com/community/ScanningHowTo http://www.raspberry-pi-geek.com/Archive/2013/01/Converting-the-Raspberry-Pi-to-a-wireless-print-server
Install the CUPS packages to support printing.
$ sudo apt-get install cups
Add the user pi to the printer administration group.
$ sudo usermod -a -G lpadmin pi
Then access the configuration webpage from the RaspPi web browser at http://192.168.1.100:631. Go through the Add Printer process and tick the enable-sharing option along the way. Assuming a test print worked from the RaspPi itself, then on the CUPS web administration homepage, enable the Share Printers option so that the printer is exposed and accessible to the whole network. On my client machines, the printer was simply listed as an available network printer and worked immediately.
Now, getting network scanning working needs a little more config file editing. In /etc/default/saned, set the default run behaviour to yes.
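That is, assuming the standard Debian-style defaults file, /etc/default/saned should end up reading:

```
# /etc/default/saned
RUN=yes
```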
Then also edit /etc/sane.d/saned.conf and add a line for your local network address range to ensure remote computers have access. In my case, I'm using the 192.168.x.x network format, which is why I use the .0.0/16 notation in the command below to allow access to the entire subnet.
$ echo "192.168.0.0/16" | sudo tee -a /etc/sane.d/saned.conf
Now restart the sane service.
$ sudo /etc/init.d/saned restart
On a Linux client computer, add the RaspPi's IP address to /etc/sane.d/net.conf so that the machine is aware of the network scanner.
$ echo "192.168.1.5" | sudo tee -a /etc/sane.d/net.conf
Search for the scanner on the Pi or a remote machine with scanimage -L, which should return information about the scanner device. Then, if this is working, you can simply scan an image to a file with the default settings:
$ scanimage > test
Retro Gaming with RetroPie
I won’t go into detail here, but I installed a separate SD card with the retropie image from the project website and followed the instructions to write the disk image file with the older style method. I ended up using the Windows method with Win32DiskImager after a failed attempt with the unix dd command line. http://blog.petrockblock.com/retropie/retropie-downloads/retropie-sd-card-image-for-raspberry-pi-2-2/ https://github.com/RetroPie/RetroPie-Setup/wiki/First-Installation
I recently also bought a couple of cheap USB SNES controllers from eBay that were easy to set up, and I've been enjoying some real retro gaming fun when not tinkering with my other projects above. Unfortunately I've had a couple of issues with button reliability (nothing to do with RetroPie) with my first controllers- but for $7 each, delivered in a couple of days during my initial excited week with the RaspPi, I didn't expect them to be perfect. http://www.ebay.com.au/itm/2x-USB-For-PC-Mac-Windows-Emulator-Super-Nintendo-SNES-Controller-GamePad-JoyPad-/371360349831?hash=item5676cd3e87:g:FDYAAOSwT6pViUIO
What's Next?
My next small project will be to configure the RaspPi as a WiFi bridge so I can connect some of my Ethernet-only AV equipment to my WiFi network. There seem to be a couple of options- a traditional UNIX iptables-based approach or a bridge-utils-based approach. http://www.glennklockwood.com/sysadmin-howtos/rpi-wifi-island.html https://wiki.debian.org/BridgeNetworkConnections
The “big” project (which is still quite small) I’ve had on my mind since the beginning is a WiFi Thermostat with temperature logging and graphing, web interface, email alerts at temperature thresholds and the possible extension to automate and allow remote access to my home heating and cooling. I’ll be basing this on a couple of great existing examples. https://learn.adafruit.com/adafruits-raspberry-pi-lesson-11-ds18b20-temperature-sensing/overview https://www.cl.cam.ac.uk/projects/raspberrypi/tutorials/temperature/ http://raspberrywebserver.com/cgiscripting/rpi-temperature-logge