The video below demonstrates the steps in motion... https://www.youtube.com/embed/Z0PZc7WqPtE
Just note that both technologies are unstable as of September 2016:
Swift is a fairly immature programming language
It lacks tooling such as a beautifier - though third-party tools such as Haaakon's SwiftFormat (https://github.com/haaakon/SwiftFormat) do the trick
There are few bug reports from, and little support by, fellow Windows users of Swift (of which there are practically none)
And each major version of Swift brings breaking changes (which might be why the Swift 1.0 code you found won't work on Swift 2.2 or even the Swift 3.0 beta).
Start Menu -> Search "Turn Windows features on or off"
Scroll Down to "Windows Subsystem for Linux (Beta)", and click the checkbox
Restart your computer; it should display a screen along the lines of "Configuring Windows", similar to a Windows Update
Once your computer has restarted, launch a normal Windows Command Prompt: Start Menu -> Search "Command Prompt" (or alternatively, cmd)
Type the command "bash", and press enter
Accept the T&C (or not, skipping this section and booting into Ubuntu is effectively the same)
Wait for the ~200MB Ubuntu Subsystem Image to Download, Extract and Install
And that's it! A mini Linux environment within Windows (albeit without an actual Linux kernel). Similar to how OS X does it, only 10 years behind.
Takes around 30 minutes.
Those Linux commands
With how administrative/root privileges work in WSL, you don't need "su"/"sudo" to run what are normally root commands (if you've skipped setting up the root user); apt-get install build-essential should work on its own.
The build-essential package is required to run Swift; without it, Swift would simply fail to execute scripts. Unfortunately, this is quite a large package on Ubuntu, and the Windows subsystem is no exception.
So run the command and wait it out; a tip to speed up the process of downloading packages is shown in the video.
Running apt-get install clang enables Swift code compilation on Windows via swiftc (swiftc with build-essential alone does not work). Interestingly, the resulting binary that swiftc produces is a native Linux/Ubuntu ELF executable instead of a Windows exe.
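Once Swift itself is installed (see the next section), a quick sanity check along these lines should work - "hello.swift" is just an illustrative name, not a file from this post:
$ echo 'print("Hello from Swift on WSL")' > hello.swift
$ swiftc hello.swift -o hello
$ ./hello
Hello from Swift on WSL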
Takes around 30 minutes.
Swift
Download Swift from here: https://swift.org/download/ - note that you'll want the Ubuntu 14.04 version, if it's still offered.
Extract the archive just like you would on Ubuntu (or extract it using the Windows method, whichever you prefer), and run the extracted usr/bin/swift through bash.
Takes around 10 minutes.
And that's it! Swift (the programming language) running on Windows 10 using the Windows Subsystem for Linux!
Although the REPL doesn't really work, and you're not exactly working with native Windows goodness.
BinChunker is an application that converts .bin "Disc Image" files to .iso with the help of a .cue file.
The application is unfortunately Unix-only (which includes Mac OS X and Linux) due to its use of a number of non-standard C headers; in fact, the code will fail to compile with a native Windows GCC port such as MinGW-w64:
$ gcc bchunk.c -o bchunk
bchunk.c:61:24: fatal error: netinet/in.h: No such file or directory
#include <netinet/in.h>
^
compilation terminated.
Fortunately, a fix has been published by mzex for using BinChunker on Windows; however, it does not come with source code, only a binary encoded as base64.
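If you're wondering how to get a usable program out of that base64 text, a minimal Python sketch along these lines does the trick (the filenames here are hypothetical, not from mzex's fix):
import base64

# Read the base64 text and write the decoded bytes out as an executable.
# "bchunk.b64" and "bchunk.exe" are hypothetical filenames.
with open("bchunk.b64") as encoded:
    binary = base64.b64decode(encoded.read())
with open("bchunk.exe", "wb") as executable:
    executable.write(binary)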
The "python.xml" file should be placed in "C:\Program Files (x86)\Notepad++\plugins\APIs" or "C:\Program Files\Notepad++\plugins\APIs", overwriting the pre-installed copy.
Apparently, one of the hardest video-editing tasks to do with a script is to create a dynamically-timed slideshow without any fancy drag-and-drop GUIs.
With Adobe After Effects, you cannot dynamically load external images using an expression (they need to be loaded into your project beforehand, and even then, you cannot load an image into a comp with an expression).
And adding hundreds of layers of images, then going through each and every one of them to edit the expression, is a fairly tedious task.
Worst of all, every change that you make, such as adding a new image to the slideshow, compounds the chore of doing things manually.
With "Python: PIL to mp4", a simple blending transition was created using PIL and OpenCV, But the objective of this post is to introduce timings to delay the animation for numerous/multiple images.
We can extend this idea of having a primitive transition to allow for an image to be delayed from transitioning until a certain amount of time has elapsed, and to allow the transition to occur after "x" amount of seconds, hence forming a slideshow.
Process
Initialization
So to start off with, we're going to need some data to work with.
Since it's Python, you can feed data in however you want - you could use a JSON file, CSV, Pickle, whatever you're comfortable with, or perhaps whatever arbitrary file format you're locked into using.
But here, a basic Python list will be used to indicate the timings and image files that will be fed into the slideshow, amongst other data...
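songData = [
    [390, u'Fractal', u'Itvara', 'minimix', u'image1.jpg'],
    [322, u'Case & Point', u'Error Code', 'minimix', u'image2.jpg'],
    [261, u'Excision & Pegboard Nerds', u'Bring the Madness (Noisestorm Remix) [feat. Mayor Apeshit]', 'minimix', u'image3.jpg'],
    [157, u'Nitro Fun', u'Final Boss', 'minimix', u'image4.jpg'],
    [88, u'Astronaut', u'Quantum (Virtual Riot Remix)', 'minimix', u'image5.jpg'],
    [0, u'Fractal', u'Contact', 'minimix', u'image6.jpg']]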
As you can see in the data above, the most relevant entries are songData[][0] and songData[][4], indicating the timings (in seconds) and the image file locations, respectively (note that the list is sorted in reverse chronological order).
We're going to set the FPS of the slideshow... 60FPS is the standard nowadays, so we're going to set that and process the songData above to reflect this...
FPS = 60 # Sets the FPS of the entire video
currentFrame = 0 # The animation hasn't moved yet, so we're going to leave it as zero
startFrame = 0 # The animation of the "next" image starts at "startFrame", at most
trailingSeconds = 5 # Sets the amount of time we give our last image (in seconds)
blendingDuration = 3.0 # Sets the amount of time that each transition should last for
# This could be more dynamic, but for now, a constant transition period is chosen
blendingStart = 10 # Sets how many seconds before its timestamp an image starts blending in

for i in songData:
    i[0] = i[0] * FPS # Convert timings from seconds to frames, so that iterating frame-by-frame results in a properly timed slideshow
Now the first image is going to be loaded in by the script - as so:
im1 = Image.open(songData[-1][4]) # Load the first (chronologically) image in
im2 = im1 # Define a second image variable, initially the same image
current = songData[-1][4] # Let the script know the location of the current image
previous = current # And this one tracks the previous image, for blending
And next up is to create the actual OpenCV video-handling capability. You can read up about this here: Python: PIL to mp4
height, width, layers = np.array(im1).shape # Get the dimensions of the image to create the video with
video = cv2.VideoWriter("slideshow.avi", -1, FPS, (width, height), True)
So that was the basic initialization routine. If you don't yet see how it fits together, don't worry. Just read on, as the full code with everything combined is below.
Main loop
So the strategy behind generating this slideshow is to loop through each and every frame and continuously feed that into our output video file. Sure, some corners could be cut, by only generating the transitions (leaving the gaps to be filled manually by an external program), but this post looks at automating the entire slideshow generation process with only Python, PIL and OpenCV.
We're going to have a main while loop that sets the limit on how long our slideshow should last.
while currentFrame < songData[0][0] + FPS * trailingSeconds: # The right-hand side defines the total length of the slideshow
And this is where the nitty gritty kicks in: the actual code that makes the transition between each image within the slideshow...
    for i in songData: # Loop through each image timing (notice how songData is reversed)
        if currentFrame >= i[0] - (blendingStart * FPS): # If this entry covers the current frame, then continue on...
            # The print statement adds some verbosity to the program
            print str(currentFrame) + " - " + str(i[0] - (blendingStart * FPS)) + " - " + i[2]
            if not current == i[4]: # Check if the image file has changed
                previous = current # We'd want the transition to start if the file has changed
                current = i[4]
                startFrame = i[0] - (blendingStart * FPS)
                # The two images in question for the blending are loaded in
                im1 = Image.open(previous)
                im2 = Image.open(current)
            break
    # See: http://blog.extramaster.net/2015/07/python-pil-to-mp4.html for the part below
    diff = Image.blend(im1, im2, min(1.0, (currentFrame - startFrame) / float(FPS) / blendingDuration))
    video.write(cv2.cvtColor(np.array(diff), cv2.COLOR_RGB2BGR))
    currentFrame += 1 # Next frame
The ending to this program is pretty self-explanatory...
# At this point, the slideshow has finished generating, and we want to close everything off to prevent a corrupted output.
video.release()
All together now!
So here's all the code required to create a timed image slideshow with PIL and OpenCV v2!
Code:
from PIL import Image
import cv2
import numpy as np

songData = [
    [390, u'Fractal', u'Itvara', 'minimix', u'image1.jpg'],
    [322, u'Case & Point', u'Error Code', 'minimix', u'image2.jpg'],
    [261, u'Excision & Pegboard Nerds', u'Bring the Madness (Noisestorm Remix) [feat. Mayor Apeshit]', 'minimix', u'image3.jpg'],
    [157, u'Nitro Fun', u'Final Boss', 'minimix', u'image4.jpg'],
    [88, u'Astronaut', u'Quantum (Virtual Riot Remix)', 'minimix', u'image5.jpg'],
    [0, u'Fractal', u'Contact', 'minimix', u'image6.jpg']]

FPS = 60 # Sets the FPS of the entire video
currentFrame = 0 # The animation hasn't moved yet, so we're going to leave it as zero
startFrame = 0 # The animation of the "next" image starts at "startFrame", at most
trailingSeconds = 5 # Sets the amount of time we give our last image (in seconds)
blendingDuration = 3.0 # Sets the amount of time that each transition should last for
# This could be more dynamic, but for now, a constant transition period is chosen
blendingStart = 10 # Sets how many seconds before its timestamp an image starts blending in

for i in songData:
    i[0] = i[0] * FPS # Convert timings from seconds to frames, so that iterating frame-by-frame results in a properly timed slideshow

im1 = Image.open(songData[-1][4]) # Load the first (chronologically) image in
im2 = im1 # Define a second image variable, initially the same image
current = songData[-1][4] # Let the script know the location of the current image
previous = current # And this one tracks the previous image, for blending

height, width, layers = np.array(im1).shape # Get the dimensions of the image to create the video with
video = cv2.VideoWriter("slideshow.avi", -1, FPS, (width, height), True)

while currentFrame < songData[0][0] + FPS * trailingSeconds: # The right-hand side defines the total length of the slideshow
    for i in songData: # Loop through each image timing (notice how songData is reversed)
        if currentFrame >= i[0] - (blendingStart * FPS): # If this entry covers the current frame, then continue on...
            # The print statement adds some verbosity to the program
            print str(currentFrame) + " - " + str(i[0] - (blendingStart * FPS)) + " - " + i[2]
            if not current == i[4]: # Check if the image file has changed
                previous = current # We'd want the transition to start if the file has changed
                current = i[4]
                startFrame = i[0] - (blendingStart * FPS)
                # The two images in question for the blending are loaded in
                im1 = Image.open(previous)
                im2 = Image.open(current)
            break
    # See: http://blog.extramaster.net/2015/07/python-pil-to-mp4.html for the part below
    diff = Image.blend(im1, im2, min(1.0, (currentFrame - startFrame) / float(FPS) / blendingDuration))
    video.write(cv2.cvtColor(np.array(diff), cv2.COLOR_RGB2BGR))
    currentFrame += 1 # Next frame

# At this point, the slideshow has finished generating, and we want to close everything off to prevent a corrupted output.
video.release()
Sample output
So with all the code above, the question arises: why would I need to create a slideshow using scripts?
Well, here's a little sample of what you can do with a simple little slideshow.
Note the timings from "songData".
If you're having trouble piping image data frame-by-frame into FFmpeg (with the subprocess module), you may be interested in another way to convert Python image data into a movie without having to store each individual frame as a file (whether as .png, .jpg or .gif sequences).
However, this method is a little more complicated than attempting to get FFmpeg to work with Python, so here's a little scenario to help you decide whether or not this tutorial is for you.
Let's say you're in a scenario, where you've encountered one of these errors:
"AttributeError: 'Popen' object has no attribute 'proc'"
Please note that the OpenCV version used is version 2, which uses
import cv2
as the import statement, as opposed to something like
import cv
There may be an update to OpenCV that breaks this code, much like the answer found in the following link: image - Python JPEG to movie - Stack Overflow.
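If you're unsure which version you have installed, a quick check from the Python shell (assuming the cv2 module imports at all) is:
import cv2
print cv2.__version__ # Prints something like "2.4.x" for OpenCV v2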
But without further ado, let's jump right into it!
Imports
We're going to be using PIL for loading and manipulating the images, NumPy for a PIL-to-OpenCV bridge, and OpenCV Version 2 for the actual image-to-movie process.
So our imports will look something like
from PIL import Image
import numpy, cv2
Manipulation
Obviously, the purpose of using Python to convert from PIL to a movie is to be able to manipulate the image frame-by-frame using the power of PIL.
So here, I'm going to demonstrate some basic image blending functionality, just to provide a basis for this tutorial.
Here, we have two images, demo3_1.jpg and demo3_2.jpg...
demo3_1.jpg
demo3_2.jpg
With PIL, you can do something like
# Imports can be found in the "Imports" section above
# Load up the first and second demo images
image1 = Image.open("demo3_1.jpg")
image2 = Image.open("demo3_2.jpg")
# Create a new image which is the half-way blend of image1 and image2
# The "0.5" parameter denotes the half-way point of the blend function.
images1And2 = Image.blend(image1, image2, 0.5)
# Save the resulting blend as a file
images1And2.save("demo3_3.jpg")
Here, the two images are blended at the halfway point, which means that half of each image is merged to become the resultant blended image.
demo3_3.jpg
This is a primitive version of an "additive blend", so you can think of it as "additive frame blending using PIL". Note also that ImageChops isn't used here, for simplicity's sake, but it offers some additional power for manipulating images.
Writing a video with OpenCV
With OpenCV, there's a "VideoWriter" method which you can access to create a movie with.
The method goes something like this:
video = cv2.VideoWriter(filename, codec selection, frames per second, (width, height))
Writing a frame to the VideoWriter is done with "video.write(numpy image array)", and the VideoWriter can be "closed" by using "video.release()".
And that's all you need to know to convert from PIL to mp4 (or at least to a movie; you need a certain codec for conversion to mp4).
All together now
With all the elements from the sections above in mind, here's the code in action
# Imports can be found in the "Imports" section above
# Load up the first and second demo images; it is assumed that image1 and image2 share the same height and width
image1 = Image.open("demo3_1.jpg")
image2 = Image.open("demo3_2.jpg")
# Grab the stats from image1 to use for the resultant video
height, width, layers = numpy.array(image1).shape
# Create the OpenCV VideoWriter
video = cv2.VideoWriter("demo3_4.avi", # Filename
    -1, # Negative 1 denotes manual codec selection. You can make this automatic by defining the "fourcc codec" (cv2.cv.CV_FOURCC in OpenCV v2, cv2.VideoWriter_fourcc in later versions)
    10, # 10 frames per second is chosen as a demo; 30FPS and 60FPS are more typical for a YouTube video
    (width, height) # The width and height come from the stats of image1
)
# We'll have 30 frames be the animated transition from image1 to image2. At 10FPS, this is a whole 3 seconds
for i in xrange(0, 30):
    images1And2 = Image.blend(image1, image2, i/30.0)
    # Conversion from PIL to OpenCV from: http://blog.extramaster.net/2015/07/python-converting-from-pil-to-opencv-2.html
    video.write(cv2.cvtColor(numpy.array(images1And2), cv2.COLOR_RGB2BGR))
# And back from image2 to image1...
for i in xrange(0, 30):
    images2and1 = Image.blend(image2, image1, i/30.0)
    video.write(cv2.cvtColor(numpy.array(images2and1), cv2.COLOR_RGB2BGR))
# Release the video for it to be committed to a file
video.release()
Note that when you run the code above, you'll get a prompt for codec selection...
It is possible to find a codec for direct .mp4 conversion; however, here, the default "Intel IYUV" codec was chosen and used.
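If you'd rather skip the prompt entirely, you can pass a FOURCC code instead of -1. Here's a minimal sketch, assuming OpenCV v2's cv2.cv.CV_FOURCC helper and that the XVID codec is available on your system:
# Hypothetical example: select the XVID codec up front instead of prompting
fourcc = cv2.cv.CV_FOURCC('X', 'V', 'I', 'D')
video = cv2.VideoWriter("demo3_4.avi", fourcc, 10, (width, height))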
From this stage, you can use FFmpeg (outside of Python) or your favourite video conversion program to convert from the codec you selected to the format you want, which can indeed include .mp4 files. ImageMagick was used to convert the output to the gif below:
And that's it!
Note that this post was made specifically for OpenCV v2, as the documentation online was, frustratingly, mostly for OpenCV v1, which has different methods and such.
As a disclaimer, things might change in a newer version of OpenCV, so this information is correct as of July 2015.
Covered Topics:
PIL to mp4 conversion
PIL to movie conversion
PIL/Python/OpenCV Video file from Images
PIL/Python/OpenCV JPEG to movie
Creating an OpenCV movie from PIL images
Converting an OpenCV movie into PIL Images
Suppose you have an image that has been manipulated with the Python Imaging Library, and you want to convert that image into a format that can be understood by the OpenCV Version 2 Library.
To do that, as of OpenCV v2, you can use the NumPy array as an intermediary format between the two libraries: NumPy can convert PIL data into the NumPy array format, and OpenCV v2 recognizes the NumPy array natively.
To demonstrate this conversion, here's some code.
# First you need to import the libraries in question.
import numpy
import cv2
from PIL import Image
# And then you need a PIL image to work with, for now, an image from a local file is going to be used.
PILImage = Image.open("demo1.jpg")
demo1.jpg
# The conversion from PIL to OpenCV is done with the handy NumPy method "numpy.array" which converts the PIL image into a NumPy array.
opencvImage = numpy.array(PILImage)
# Display the OpenCV image using inbuilt methods.
cv2.imshow('Demo Image',opencvImage)
cv2.waitKey(0)
cv2.destroyAllWindows()
# Which results in:
However, as you can see in the demonstration, the output OpenCV image turned out a little weird, with the colours not matching the original PIL image (the OpenCV image has the wrong colours). You can try this out for yourself...
This is because PIL stores multi-channel images in RGB channel order while OpenCV expects BGR, so a conversion from PIL to OpenCV involves a little bit of additional translation; this can be solved with OpenCV's "cvtColor" method.
To demonstrate this additional translation, here's some code.
# First you need to import the libraries in question.
import numpy
import cv2
from PIL import Image
# And then you need a PIL image to work with, for now, an image from a local file is going to be used.
PILImage = Image.open("demo2.jpg")
demo2.jpg
# The conversion from PIL to OpenCV is done with the handy NumPy method "numpy.array" which converts the PIL image into a NumPy array.
# cv2.cvtColor does the trick for correcting the colour when converting between PIL and OpenCV Image formats via NumPy.
opencvImage = cv2.cvtColor(numpy.array(PILImage), cv2.COLOR_RGB2BGR)
# Display the OpenCV image using inbuilt methods.
cv2.imshow('Demo 2 Image',opencvImage)
cv2.waitKey(0)
cv2.destroyAllWindows()
# Which results in:
And that's it!
Note that this post was made specifically for OpenCV v2, as the documentation online was, frustratingly, mostly for OpenCV v1, which has different methods and such.
As a disclaimer, things might change in a newer version of OpenCV, so this information is correct as of July 2015.
Covered Topics:
PIL to NumPy conversion
PIL to cv2 conversion
PIL to OpenCV conversion
Adaptors.PIL2Ipl alternative/replacement/not working
Creating an OpenCV image from a PIL image
Converting an OpenCV image into a PIL Image
After plenty of time spent messing around with Control Panel
The basic "tutorials" only show you how to change the number of lines scrolled at a time, but do not deal with the problem of mouse scroll wheel sensitivity
After restarting/logging off a few times (logging off is apparently supposed to apply the registry changes that you've made)
And even after re-plugging the nano-transceiver device,
Nothing seemed to work to fix the delayed/slow scrolling with the Microsoft Mouse...
This is the Windows Store app. Or, at least, it would be if it worked. For now, it's just a splash screen...
Well, it would still be just a splash screen if it weren't for the fact that, after a long wait, an error message appears.
The one I got first was:
We weren't able to connect to the Store. This might have happened because of a server problem or the network connection might have timed out. Please wait a few minutes and try again. (0x80072ee2)
The next attempt in opening the store app resulted in another error:
We weren't able to connect to the Store. This might have happened because of a server problem or the network connection might have timed out. Please wait a few minutes and try again. (0x80190190)
Note how the error code was different in each case...
So if you're currently encountering an error message with one of these error codes, then you're in luck, since there is a fix! If the error code and problem are similar (but not exactly the same), have a read through anyway and see if your problem gets fixed.
If you've ever dealt with CHM files, then you already know that those letters are short for "Compiled HTML", typically used as a resource to help or guide users through a particular set of problems.
So we all know Python. It's a fun language whose main implementation is built on top of C. But one question is: why can't we compile Python into an exe file, just like with C?
Now, Python is an interesting language, in that it is both compiled and interpreted. In fact, there are many different implementations of the language. We have PyPy on one hand, which does both, and IronPython, which is basically a .NET implementation of Python, but the main focus here is on CPython, or what we all know as "Python". CPython is the main reference implementation of Python, and is the "Python" that you download from https://www.python.org/.
It's weird to think that "Python" is not just Python, but it makes sense considering that Python is open source. A similar comparison to PyPy/IronPython versus CPython can be made with JavaScript, where JavaScript is implemented not as "JavaScript", but as V8, SpiderMonkey, Rhino and much more. We've come a long way from the monopoly that Microsoft had with its own programming languages: C#, VB.NET and VBScript.
If you're looking for a way to hide hard drives on Windows XP, 7, 8, 8.1 or any other Windows version, then there's a simple solution that doesn't involve installing or running third-party programs.
In fact, if you've looked around, the only feasible option is to use the Windows registry to set flags for which hard drives to hide inside Explorer.
Disclaimer: using the registry is dangerous; however, it's the lesser of evils between hacking about with Group Policy (which you can't actually do on Windows 7 Basic, Starter, Home Premium, etc.) and using the command prompt to fiddle around with an even more volatile environment.
Make sure you backup your registry before continuing (for when something goes wrong now, or in the future).
So, with registry hacking, what about the big wall of text with all of those fancy numbers and whatnot?
A list of numbers without any immediate use. Source: ghacks.net
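Those numbers are just drive-letter bitmasks. Assuming the usual "NoDrives" DWORD (under Software\Microsoft\Windows\CurrentVersion\Policies\Explorer, where bit 0 is drive A:, bit 1 is B:, and so on), a small Python sketch can compute the value for you:
# Compute the NoDrives bitmask for the drive letters you want hidden
# (bit 0 = A:, bit 1 = B:, ..., bit 25 = Z:)
drivesToHide = "DE" # Hypothetical: hide drives D: and E:
mask = 0
for letter in drivesToHide:
    mask |= 1 << (ord(letter.upper()) - ord('A'))
print mask # The decimal value to put into the NoDrives DWORD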
#0 Start with an empty registry with everything remaining untouched.
Note: You don't have to navigate here, or perform this step AT ALL while using the tool - there may well already be something here... It doesn't really matter...
#1 Find and identify the drives that you want to remove
Being a triple-quadruple boot install, no drives other than the C:\ drive actually need to be visible...
#2 Visit the page
You'll be greeted with something like this...
#3 Select the drives that you want to hide
#4.5 Click on the "Download" link/button to download
"Keep" the file, and then open it...
I wasn't able to screenshot the following dialogue box in Windows 7, but here's what you have to do in Windows 8.1...
#4.5.1 Run the file when the security warning pops up
Windows 7 came with a nifty feature called Advanced Query Syntax (AQS), which allowed users to search files and folders based upon rules and expressions.
This functionality was extended to Windows 8 and Windows 8.1, where the syntax and concept have remained intact.
Compared to regular expressions, AQS has yet to live up to its "Advanced" claim, and should really fashion itself as the "Basic Query Syntax" (BQS).
Consider this scenario. I have a list of archives that I want to search through on my computer.
These archives have a specific naming structure, 01-01-01 Archive.rar, 02-01-01 Archive.rar, 03-01-01 Archive.rar... Basically following the pattern of "DD-MM-YY Name.rar"
With "AQS", you're allowed to use the AND and OR operators, where to achieve searching the said files, you could hypothetically do something like:
(0 or 1 or 2 or 3 or 4 or 5 or 6 or 7 or 8 or 9) and (0 or 1 or 2 or 3 or 4 or 5 or 6 or 7 or 8 or 9) * type:.rar
Well... this worked in Windows 7. But since Windows 8.1 is the trend now, I updated my computer and found that it no longer worked.
The question to be asked is: why is it so complicated to search for numbers in the first place?
(0-9) and (0-9) * type:.rar
[0-9] and [0-9] * type:.rar
title:(0-9) and (0-9) * type:.rar
title:[0-9] and [0-9] * type:.rar
(0-9)(0-9) * type:.rar
[0-9][0-9] * type:.rar
(0-100) * type:.rar
[0-100] * type:.rar
title:(0-100) * type:.rar
title:[0-100] * type:.rar
None of them work...
With regular expressions, a pattern along the lines of "[0-9][0-9]-[0-9][0-9]-[0-9][0-9] .*\.rar" matches these filenames just fine.
However, the same kind of pattern fails completely in Windows 8.1's search...
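To show that the pattern itself is sound outside of Windows Search, here's a quick sketch using Python's re module against the example filenames from earlier:
import re

# The "DD-MM-YY Name.rar" pattern from above
pattern = re.compile(r"^[0-9][0-9]-[0-9][0-9]-[0-9][0-9] .*\.rar$")
for name in ["01-01-01 Archive.rar", "02-01-01 Archive.rar", "notes.txt"]:
    print name, bool(pattern.match(name)) # True, True, False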
The best solution that I could come up to solve my problem was to use the following AQS query:
("00" OR "01" OR "02" OR "03" OR "04" OR "05" OR "06" OR "07" OR "08" OR "09" OR "10" OR "11" OR "12" OR "13" OR "14" OR "15" OR "16" OR "17" OR "18" OR "19" OR "20" OR "21" OR "22" OR "23" OR "24" OR "25" OR "26" OR "27" OR "28" OR "29" OR "30" OR "31")
Though, I guess when it comes down to it, this is just proof that the "Advanced Query Syntax" format is simply not powerful enough.