So, you just learnt how to control a buzzer using an Arduino, but you want more than just simple beeps and hums - you want to assert your dominance over the sound waves and show off just how well you can control the buzzer.
Cynical? Well, there's no denying that it's a sentiment we all share once we learn something new, but hey, if you've managed to acquire an Arduino and get it working, that's quite an accomplishment already.
A quick search for "music buzzer arduino" should point you in the right direction, but no-one has the time to individually hand-code and map MIDI notes to the frequency values that the Arduino's Tone function requires, especially if you're working with a long MIDI track.
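If you'd rather not punch those numbers in by hand, the mapping from a MIDI note number to a frequency is just the standard equal-temperament formula (A4 = note 69 = 440 Hz). Here's a quick Python sketch of the conversion - my own illustration, not part of any particular search result:

# Sketch: convert MIDI note numbers (0-127) to the integer frequencies (Hz)
# that the Arduino Tone function expects, via f = 440 * 2^((n - 69) / 12)
def midi_to_freq(note):
    return int(round(440.0 * 2 ** ((note - 69) / 12.0)))

# e.g. a C major scale starting at middle C (note 60):
for note in [60, 62, 64, 65, 67, 69, 71, 72]:
    print(midi_to_freq(note))  # 262, 294, 330, 349, 392, 440, 494, 523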
Wiring the buzzer is trivial: just place the buzzer across two strips on a breadboard, connect one strip to pin 11 on your Arduino, and the other strip to ground.
Oh, and by the way, it appears that you can leave the piezo buzzer running on loop for an extensive amount of time (24 hours+), so if you want to play a MIDI track indefinitely, an Arduino can do so.
Unfortunately, you can only have one tone running at a time, so if your MIDI track has multiple keys being played simultaneously, expect some wacky results. (here - try this: http://www.forelise.com/midi - "Track 2: Acoustic Grand Piano - Piano - Für Elise")
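If you want to check ahead of time whether a MIDI file even contains overlapping notes, here's a minimal Python sketch using the mido library - an assumption on my part, and "for_elise.mid" is just a placeholder filename:

import mido

# Sketch: report the maximum number of simultaneously held notes in a MIDI file
active = set()  # notes currently held down
peak = 0
for msg in mido.MidiFile("for_elise.mid"):
    if msg.type == "note_on" and msg.velocity > 0:
        active.add(msg.note)
        peak = max(peak, len(active))
    elif msg.type in ("note_off", "note_on"):  # a note_on with velocity 0 counts as note_off
        active.discard(msg.note)
print(peak)  # anything above 1 will sound wacky on a single buzzer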
If you attempt some protothreads hack, then still expect only one piezo buzzer to work at a time.
But if you're game and still want to give protothreads a try, go for it.
For a bit of context, I "backed up" my LG QuickMemos from the "storage/emulated/0/Android/data/com.lge.qmemoplus" folder (friendly reminder: back up your LG phone with the official LG Bridge app), and started to delete some fairly important notes - or at least, notes that used to be important - since I had full faith in the backup safekeeping any important residual information (so much so, in fact, that I cleared my clip tray and emptied out the built-in QuickMemo trash).
Well, it turned out that the folder at "storage/emulated/0/Android/data/com.lge.qmemoplus" only contains the following: "Audios, Drawings, Images and Videos" - not the memos themselves. And so, with a little bit of digging around the root directory, the actual path of the QuickMemo memo database was found.
Note that you may need root to access this folder (and rooting an LG phone is trivial).
Edit (October 2016)
Root is required to easily access the database file at the specified path.
So either chmod the database files as root (so that the QuickMemo database files can be opened later without root), or copy them somewhere accessible:

su
cp /data/data/com.lge.qmemoplus/databases/qmemoplus.db /storage/emulated/0
An alternative method could be to recover the LG (G4) QuickMemo+ file from a real backup, although it appears that the backup encrypts, or otherwise hides, the database file.
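Once you've pulled qmemoplus.db off the phone, it's just an SQLite database, so you can poke around it with plain Python. A minimal sketch follows - note that the actual table and column names vary by firmware, so the "Memo" table below is a hypothetical placeholder; list the tables first and adapt:

import sqlite3

conn = sqlite3.connect("qmemoplus.db")
cur = conn.cursor()
# List every table in the database to find where the memos live
cur.execute("SELECT name FROM sqlite_master WHERE type='table'")
for (name,) in cur.fetchall():
    print(name)
# Hypothetical example: dump all rows of a table named "Memo"
# cur.execute("SELECT * FROM Memo")
# for row in cur.fetchall():
#     print(row)
conn.close()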
So what happens if you purchase an off-brand "tablet" device, and an incompatible "tempered glass screen protector"?
Well, you should throw the screen protector away, or at least donate it to a friend with a device that it actually fits - because unless you have specialised machinery, cutting tempered glass with a pair of scissors, a saw, or anything similar will simply break it.
To clarify, when you cut a tempered glass screen protector, the glass will immediately shatter where you cut it.
So unless you've actually purchased a "fake tempered glass screen protector", as soon as you apply pressure with the pair of scissors, the tempered glass will break, rendering it unusable...
N.B. I purchased two of these screen protectors since they were on sale - one out of curiosity (of course, using anything outside of its intended purpose has a high chance of destroying it), and the other for an actual iPad.
Apparently, one of the hardest video-editing tasks to do with a script is to create a dynamically-timed slideshow without any fancy drag-and-drop GUIs.
With Adobe After Effects, you cannot dynamically load external images using an expression (they will need to be loaded into your project beforehand, and even then, you cannot load the image into a comp with an expression).
And adding hundreds of layers of images and having to go through each and every one of them to edit the expression is a fairly tedious task.
Worst of all, every change that you make - such as adding a new image to the slideshow - compounds the chore of doing things manually.
With "Python: PIL to mp4", a simple blending transition was created using PIL and OpenCV, But the objective of this post is to introduce timings to delay the animation for numerous/multiple images.
We can extend this idea of having a primitive transition so that an image is delayed from transitioning until a certain amount of time has elapsed, with each transition occurring after "x" seconds - hence forming a slideshow.
Process
Initialization
So to start off with, we're going to need some data to work with.
Since it's Python, you can do whatever you want to feed data in - you could use a JSON file, CSV, Pickle, whatever you're comfortable with, or perhaps, whatever arbitrary file format that you're locked into using.
But here, a basic Python list will be used to indicate the timings and the image files that will be fed into the slideshow, amongst other data:
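songData = [
    [390, u'Fractal', u'Itvara', 'minimix', u'image1.jpg'],
    [322, u'Case & Point', u'Error Code', 'minimix', u'image2.jpg'],
    [261, u'Excision & Pegboard Nerds', u'Bring the Madness (Noisestorm Remix) [feat. Mayor Apeshit]', 'minimix', u'image3.jpg'],
    [157, u'Nitro Fun', u'Final Boss', 'minimix', u'image4.jpg'],
    [88, u'Astronaut', u'Quantum (Virtual Riot Remix)', 'minimix', u'image5.jpg'],
    [0, u'Fractal', u'Contact', 'minimix', u'image6.jpg']]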
As you can see in the data above, the most relevant fields are songData[][0] and songData[][4], holding the timings (in seconds) and the image file locations, respectively.
We're going to set the FPS of the slideshow... 60 FPS is the standard nowadays, so we're going to set that and process the songData above to reflect it:
FPS = 60 # Sets the FPS of the entire video
currentFrame = 0 # The animation hasn't moved yet, so we're going to leave it as zero
startFrame = 0 # The animation of the "next" image starts at "startFrame", at most
trailingSeconds = 5 # Sets the amount of time we give our last image (in seconds)
blendingDuration = 3.0 # Sets the amount of time that each transition should last for
# This could be more dynamic, but for now, a constant transition period is chosen
blendingStart = 10 # Seconds before a song's timestamp at which its image starts blending in
for i in songData:
    i[0] = i[0] * FPS # Convert timings from seconds to frames, so that iterating frame-by-frame results in a properly timed slideshow
Now the first image is going to be loaded in by the script, like so:
im1 = Image.open(songData[-1][4]) # Load the first image in
im2 = im1 # Define a second image to force a global variable to be created
current = songData[-1][4] # Keep track of the current image's file location
previous = current # And this is to force/declare a global variable
And next up is to create the actual OpenCV video-handling capability. You can read up about this here: Python: PIL to mp4
height, width, layers = np.array(im1).shape # Get the image's dimensions to create the video with
video = cv2.VideoWriter("slideshow.avi", -1, FPS, (width, height), True)
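One caveat: the -1 codec argument pops up a codec-selection dialog, which only works on Windows. On other platforms (or for non-interactive runs), you can name a codec up-front - with OpenCV v2's Python bindings, something like this (XVID being just one choice):

# Assumption: OpenCV 2.4-style bindings; picks the codec explicitly
# instead of relying on the -1 codec-selection dialog
fourcc = cv2.cv.CV_FOURCC(*"XVID")
video = cv2.VideoWriter("slideshow.avi", fourcc, FPS, (width, height), True)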
So that was the basic initialization routine. If you don't get how it works together yet, don't worry. Just read on - the full code with everything combined is below.
Main loop
So the strategy behind generating this slideshow is to loop through each and every frame and continuously feed it into our output video file. Sure, some corners could be cut - for instance, by only generating the transitions and leaving the gaps to be filled in manually with an external program - but this post is looking at automating the entire slideshow generation process with only Python, PIL and OpenCV.
We're going to have a main while loop that sets the limit on how long our slideshow should last.
while currentFrame < songData[0][0] + FPS * trailingSeconds: # RHS defines the limit of the slideshow
And this is where the nitty gritty kicks in: the actual code that makes the transition between each image within the slideshow...
for i in songData: # Loop through each image timing
    if currentFrame >= i[0] - (blendingStart * FPS): # If this entry is the one covering the
                                                     # current frame, then continue on...
                                                     # (Notice how songData is reversed)
        # The print statement adds some verbosity to the program
        print str(currentFrame) + " - " + str(i[0] - (blendingStart * FPS)) + " - " + i[2]
        if not current == i[4]: # Check if the image file has changed
            previous = current # We want the transition to start if the file has changed
            current = i[4]
            startFrame = i[0] - (blendingStart * FPS)
            # The two images in question for the blending are loaded in
            im1 = Image.open(previous)
            im2 = Image.open(current)
        break
# See: http://blog.extramaster.net/2015/07/python-pil-to-mp4.html for the part below
diff = Image.blend(im1, im2, min(1.0, (currentFrame - startFrame) / float(FPS) / blendingDuration))
# e.g. 90 frames after startFrame at 60 FPS: 90 / 60 / 3.0 = 0.5, i.e. halfway through the blend
video.write(cv2.cvtColor(np.array(diff), cv2.COLOR_RGB2BGR))
currentFrame += 1 # Next frame
The ending to this program is pretty self-explanatory...
# At this point, we'll assume that the slideshow has completed generating, and we want to close everything off to prevent a corrupted output.
video.release()
All together now!
So here's all the code required to create a timed image slideshow with PIL and OpenCV v2!
Code:
from PIL import Image
import cv2
import numpy as np

songData = [
    [390, u'Fractal', u'Itvara', 'minimix', u'image1.jpg'],
    [322, u'Case & Point', u'Error Code', 'minimix', u'image2.jpg'],
    [261, u'Excision & Pegboard Nerds', u'Bring the Madness (Noisestorm Remix) [feat. Mayor Apeshit]', 'minimix', u'image3.jpg'],
    [157, u'Nitro Fun', u'Final Boss', 'minimix', u'image4.jpg'],
    [88, u'Astronaut', u'Quantum (Virtual Riot Remix)', 'minimix', u'image5.jpg'],
    [0, u'Fractal', u'Contact', 'minimix', u'image6.jpg']]

FPS = 60 # Sets the FPS of the entire video
currentFrame = 0 # The animation hasn't moved yet, so we're going to leave it as zero
startFrame = 0 # The animation of the "next" image starts at "startFrame", at most
trailingSeconds = 5 # Sets the amount of time we give our last image (in seconds)
blendingDuration = 3.0 # Sets the amount of time that each transition should last for
# This could be more dynamic, but for now, a constant transition period is chosen
blendingStart = 10 # Seconds before a song's timestamp at which its image starts blending in

for i in songData:
    i[0] = i[0] * FPS # Convert timings from seconds to frames, so that iterating frame-by-frame results in a properly timed slideshow

im1 = Image.open(songData[-1][4]) # Load the first image in
im2 = im1 # Define a second image to force a global variable to be created
current = songData[-1][4] # Keep track of the current image's file location
previous = current # And this is to force/declare a global variable

height, width, layers = np.array(im1).shape # Get the image's dimensions to create the video with
video = cv2.VideoWriter("slideshow.avi", -1, FPS, (width, height), True)

while currentFrame < songData[0][0] + FPS * trailingSeconds: # RHS defines the limit of the slideshow
    for i in songData: # Loop through each image timing
        if currentFrame >= i[0] - (blendingStart * FPS): # If this entry is the one covering the
                                                         # current frame, then continue on...
                                                         # (Notice how songData is reversed)
            # The print statement adds some verbosity to the program
            print str(currentFrame) + " - " + str(i[0] - (blendingStart * FPS)) + " - " + i[2]
            if not current == i[4]: # Check if the image file has changed
                previous = current # We want the transition to start if the file has changed
                current = i[4]
                startFrame = i[0] - (blendingStart * FPS)
                # The two images in question for the blending are loaded in
                im1 = Image.open(previous)
                im2 = Image.open(current)
            break
    # See: http://blog.extramaster.net/2015/07/python-pil-to-mp4.html for the part below
    diff = Image.blend(im1, im2, min(1.0, (currentFrame - startFrame) / float(FPS) / blendingDuration))
    video.write(cv2.cvtColor(np.array(diff), cv2.COLOR_RGB2BGR))
    currentFrame += 1 # Next frame

# At this point, we'll assume that the slideshow has completed generating, and we want to close everything off to prevent a corrupted output.
video.release()
Sample output
So with all the code above, the question arises: why create a slideshow using scripts at all?
Well, here's a little sample of what you can do with a simple little slideshow.
Note the timings from "songData".