Live support during the service period
m.objects for Windows and macOS
The m.objects project directory
Exchange a dongle for an activation code
Rent an m.objects license for a limited period of time
Special features of the m.objects application under macOS
Inserting media files into a show
Extended desktop under Windows 7, 8 and 10
12 steps to a live presentation
Inserting music into the soundtrack
Live commentary with wait marks
Remote control as a comfort turbo
The m.objects program interface
Adjust the display size of the interface
Set up the interface individually
Automatically show and hide dockable windows
Save and load individual window layouts
Automatic scrolling in picture and sound tracks
The timeline with time ruler and tracks
Naming tracks and displaying object properties
The color coding of the light curves
General information on objects
Paste macros and other content from the clipboard
Automatic extension of the object selection
Arbitrary groups of curve handles in the clipboard or in macros
Horizontal and vertical movement of objects and handles
Working with the arrow controls
Multi-editing for all object types
New show - a project in m.objects
Intelligent insertion of media
Calling up the Finder or Explorer from the Timeline
Track assignment of newly inserted media
Whereabouts of media deleted from the timeline
Duplicate search and duplicate filter
Evaluation of the sensor position
Inserting images via Windows Explorer
Insert images via the lightbox
Keyword management - the lightbox becomes a storyboard
Image properties - the Edit image window
Intelligent adjustment of the aspect ratio
Creating texts with the title editor
Create titles with external image editing software
Color management and calibration
Quality characteristics of image files
Things to know about file formats
Export the content of the canvas as a single image
Paste image content directly from the clipboard
General information on dynamic objects
Speed/pitch - dynamic slow motion / time lapse and time stretching
The new audio engine as of m.objects version 8
Quick change of the output for the sound
Targeted driver assignment for individual audio tracks
Inserting audio files into a show
Recording sound with a microphone
Move audio excerpt with the mouse
Sound effects and global dynamic settings
Hardware support for video decoding
Inserting videos into the m.objects Show
Smoothing the playback of video clips with an unsuitable frame rate
Select audio stream from a video
Store edited video clips on the lightbox
In/out times - manual input and SMPTE time code
Move video section with the mouse
Frame-accurate and lossless video trimming
Applying objects and masks to videos
Cutting media at the locator position
Color grading for images and videos
Color grading with image/video processing
Global Color Grading (Post Processing)
Color grading with lookup tables
Wizard: Synchronize images to time stamps
Wizard: Align fade-ins / fade-outs
Wizard: Compress/stretch or standardize timing
Wizard: Animation (Ken Burns)
Wizard: Separate video sound to audio track
Wizard: Trim (shorten) video files without loss
Wizard: Stabilize or reverse video files
Wizard: Insert wait marks and adjust timing
Wizard: Autoshow, multiple copying of objects
Presets for stereoscopic presentations
Inserting images and video sequences
Use of the 3D object with image field and zoom object
Camera movement through a picture
Real-time rendering from m.objects
Real-time rendering with presentation file (EXE)
EXE file with videos in the presentation directory
Export a predefined area of the timeline as an EXE
Cutting and selecting an export area
Video series export of several export areas
Multiscreen with different content
One screen on multiple output devices
Warp setup for curved surfaces
Manual ducking for spontaneous moderation
Speaker preview for live lectures
View for lecture time and time of day
Index marks with real-time trigger
Index/skip marks, area marks and interactive image fields
Visual feedback of interactive image fields
Control functions of interactive image fields
General specifications for video export
Evaluation of the final result
Remote - extended control options
Calling up external programs and files from the timeline
Remote control of digital projectors
Lighting control / DMX control
m.objects is an extremely flexible and powerful software for creating audiovisual productions (AV shows) and for their playback and export. The range of applications extends from the classic photo show with sound, to effective arrangements of images and videos, to the control of highly complex AV systems consisting of a large number of projectors and other peripheral devices and using numerous sound channels.
m.objects is completely graphically oriented and can be operated intuitively thanks to its clearly structured interface. Both the arrangement of still images (photos and graphics) and videos as well as the complete soundtrack and the control of other devices can be defined directly in the program interface using the mouse.
Tools for image, video and sound editing and for organizing the media files used are available within m.objects. These range from integrated image editing and video editing to recording functions for analog and digital sound sources and the use of sound effects.
m.objects consistently utilizes two important principles:
The arrangement and the media used are processed non-destructively, i.e. without any loss of quality in the original material. All editing steps can be undone or redone in a modified form at any time. The source files such as photos, videos or sound recordings remain completely untouched.
The playback of a production can be started at any time and at any point and immediately delivers full picture and sound quality. Edits that have just been made are immediately audible and visible, without the need for lengthy and quality-reducing calculation processes (rendering). You therefore know the final result of each edit immediately. Since the principle of immediate availability in full quality is extended to all media in m.objects, the term real-time rendering is used for this.
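The two principles described above can be illustrated with a small sketch. Note that this is purely illustrative and assumes a simplified model: m.objects' internals are not public, and the class and method names here are our own invention. The point is that the source file is never modified; edits live in a reversible list that is evaluated on the fly at playback time.

```python
# Illustrative sketch only (not m.objects code): non-destructive editing
# keeps the source file untouched and stores edits as a reversible list
# of operations that are applied on the fly during playback.
class NonDestructiveClip:
    def __init__(self, source_path):
        self.source_path = source_path   # original file, never changed
        self.edits = []                  # ordered list of edit operations

    def apply(self, edit):
        self.edits.append(edit)          # e.g. ("brightness", 0.2)

    def undo(self):
        if self.edits:
            return self.edits.pop()      # every step can be taken back

    def render_frame(self):
        # edits are evaluated at playback time ("real-time rendering")
        frame = f"frame from {self.source_path}"
        for name, value in self.edits:
            frame += f" + {name}({value})"
        return frame

clip = NonDestructiveClip("photo.jpg")
clip.apply(("brightness", 0.2))
clip.apply(("rotate", 90))
clip.undo()                   # the rotation is removed without a trace
print(clip.render_frame())    # frame from photo.jpg + brightness(0.2)
```

Because nothing is baked into the source material, every edit can be undone or changed at any time, and the result is visible immediately.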
Video training courses on the m.objects YouTube channel offer a thorough and easy-to-follow introduction to working with the software and also demonstrate advanced working techniques using numerous practical examples.
Thanks to state-of-the-art software technology, careful optimization and the use of the capabilities of the available computer hardware, m.objects achieves first-class output quality. Its main strengths are:
- Practically no software restrictions on input and output resolution: high definition even above the UHD or 4K standard per output device, with optional use of multiple output devices to further increase the overall resolution
- Ability to display colors correctly: full ICC color management and correct handling of calibrated output devices such as monitors and digital projectors
- Processing of practically all common audio and video file formats with the best possible integration of the available hardware capacities (graphics-card-based video decoding)
- Smooth dynamic effects such as transitions, zoom and camera movements, rotation and more: real-time output of progressive single images with a constantly high refresh rate synchronized to the output device
- Lowest possible demands on the computer hardware: largely automatic adaptation of the program to the available hardware by using the latest software technology
Despite the high degree of optimization of m.objects, a certain minimum configuration is necessary for optimal processing of complex arrangements. You will find more detailed information on this in the System requirements chapter.
With m.objects you can produce and present multivision shows at a professional level. You can make optimum use of the quality of your high-resolution image material and achieve color-accurate playback using color management. m.objects ensures practically smooth motion sequences even with a large number of image tracks, including cross-fade effects, camera movements through images, zooming into the smallest details, rotations and 3D animations. Prior resizing or sharpening of photos is not necessary, and in many cases even counterproductive. You can integrate the image material directly into the program: in addition to manufacturer- and camera-specific RAW files, m.objects can read and process all common image formats.
m.objects also supports the integration of high-resolution videos. You can arrange sound effects on the audio tracks in exactly the way that best suits the flow of your presentation. Music tracks can be easily imported into the program from CDs and other sources in top quality, and spoken commentaries can be recorded directly via m.objects. With just a few mouse clicks, you can achieve the optimum mix of the sound, so that over- or under-mixing is avoided without any effort.
m.objects also offers you all the possibilities on the output side. Seamless large-screen presentations in immensely high resolution with automatic distribution of the image signal to a large number of digital projectors are also possible.
m.objects combines all of this with an intuitive program interface in which professionals can work efficiently and beginners can quickly find their way around. Important image editing functions such as brightness control, contrast adjustment, tonal value correction, sharpening or blurring are integrated into the program, as are frequently used filter effects. The same applies to video and sound editing. Videos and sound samples can be cut and edited in m.objects. So instead of switching back and forth between different applications, you can work quickly and conveniently in one program.
m.objects does not claim to completely replace specialized software solutions for image, sound or video editing. For special applications such as complex cropping, elaborate video editing or sophisticated sound processing, appropriate specialist software can be used and the result integrated into the m.objects presentation.
You should definitely allow yourself a certain amount of time to familiarize yourself with the software, despite its intuitive operation. The first steps in the program are quickly made and short shows can be created without much effort, but some experience is required before you can create a feature-length presentation that will captivate your audience. But then the possibilities for creative work are almost unlimited.
The applications with m.objects are as varied as the possibilities: On a smaller scale, presentations can be shown on a computer monitor or TV screen.
Presentations using a digital projector can be even more interesting, as good devices deliver excellent sharpness of detail, brilliance and brightness. Here too, the range of applications is very wide, from presentations in private settings on smaller screens to lectures in large halls on correspondingly large screens, such as those given by lecturers with mobile presentation equipment.
There are also fixed installations with a large number of projectors, which are used to show impressive panoramic shows.
On the m.objects website you will find a number of examples of installations and live presentations with m.objects in the references section:
https://www.mobjects.com/ueber-m-objects/referenzen/
We offer comprehensive support for your m.objects AV software. Within the service period (the period for free updates; see the form under Settings / Activation), this also includes free remote diagnosis or problem solving via PC remote control (TeamViewer) for questions that cannot be resolved simply by means of the manual or by telephone. You can find the required module directly in the Help menu under Live Support.
The time for remote control support should of course be agreed in advance by telephone or e-mail.
As of version X, m.objects is available for use under Windows as well as directly under macOS. While previous versions required the installation of an additional partition with Bootcamp or the installation of a virtual machine for use on Apple computers, this detour is no longer necessary with m.objects X. This means that production and presentation in the usual m.objects quality is possible directly on both systems. X stands for 'cross platform'. On the one hand, this means that m.objects productions are fully compatible between Windows and Apple computers. An m.objects show that you have created on a Windows computer can therefore be edited on a Mac without any conversion and vice versa. On the other hand, every current m.objects license is suitable for use on both Windows and macOS. This means that you do not need a special license for either platform. In addition, future developments of the software will be made simultaneously for Windows and macOS. Newer program versions will always be adapted to the current operating systems and requirements.
You can download the current m.objects version at
www.mobjects.com/downloadcenter
where you will find a link to the Windows and macOS installation.
There are detailed instructions for installation under macOS.
Under Windows, double-click the file after downloading it. The start screen of the setup wizard appears first. Click Next and select the installation type in the following window.
If you click on Complete, in addition to the actual program, sample files are also installed which demonstrate various program functions and which are also referred to here in the manual. If you do not want to install m.objects in the standard directory, select the Custom option instead. If you select Standard, the program will be installed without sample files.
If m.objects is already installed and you start exactly the same setup again, the Repair option appears at this point, with which you can install the program again over the existing version.
Then click on Install to start the actual installation.
Once the installation is complete, confirm by clicking on Finish to close the installation wizard.
After installation, a browser window opens with information on the current version.
After installation, you will find the m.objects Data directory in the Windows File Explorer under Documents or in the macOS Finder under Macintosh HD → Users → Shared. Here the program saves all projects that you create with m.objects in the Show folder. The MixDown folder is used for exports from m.objects. So if you export a show as a video, the video file is saved here. If the MixDown folder does not yet exist, m.objects will create it automatically when required. The specifications for the project directory can be found under Settings → Program settings, under Paths and applications.
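The default locations named above can be resolved programmatically, for example when scripting a backup of the project directory. The following sketch only encodes the paths stated in this manual; the helper function name is ours, not part of m.objects.

```python
# Sketch of the default m.objects data locations as named in this manual
# (Windows: Documents\m.objects Data; macOS: /Users/Shared/m.objects Data).
# The function name is a hypothetical helper, not an m.objects API.
import os
import sys

def default_mobjects_data_dir():
    if sys.platform == "win32":
        # Windows File Explorer: Documents\m.objects Data
        return os.path.join(os.path.expanduser("~"),
                            "Documents", "m.objects Data")
    # macOS Finder: Macintosh HD -> Users -> Shared -> m.objects Data
    return "/Users/Shared/m.objects Data"

base = default_mobjects_data_dir()
show_dir = os.path.join(base, "Show")        # projects are saved here
mixdown_dir = os.path.join(base, "MixDown")  # exported videos land here
```

If you have moved the project directory via Settings → Program settings, these defaults no longer apply and the configured path must be used instead.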
Start m.objects by double-clicking on the icon on the desktop. If you do not yet have a license, you can run the program free of charge either as freeware or as a full version (demo).
The freeware provides you with two image tracks and an audio track for simple image sequences with background sound. However, the range of functions of the freeware is significantly limited compared to the licensed versions, as is the export of presentations and the suitability for sophisticated live presentations.
Click on Full version (demo) to test the full range of m.objects functions. Almost all the functions of the m.objects creative expansion stage are then available. Content can be arranged on any number of image and sound tracks. However, the maximum duration of a production created in this mode is limited to 2 minutes.
If you have not yet set up a license, start m.objects, click on Activate license in the Select operating mode window and enter the license number and the installation code. Click OK to activate your license. In this way, you can activate your m.objects license on up to two computers.
This form of activation applies to all m.objects licenses, unless they are activated with a hardware USB dongle.
For the operation of m.objects under Windows, in addition to the form of activation described above, there is also the option of activating a license with a USB dongle, which must be connected to the computer when the program is started. Up to version 9.5, the m.objects licenses live, creative and ultimate as well as m.objects pro were always activated with a USB dongle. This type of activation is not available for the operation of m.objects under macOS.
If the Select operating mode window is displayed when you start the program despite the dongle being connected, you must first install the dongle driver. To do this, click on Activate license and in the following window on the Install hardware dongle button. If you have already started m.objects in demo mode, you will also find this button under Activation settings.
After clicking on Install hardware dongle, you will first be asked to remove the dongle from the USB port.
Confirm with Yes in the next window.
The driver installation will then start and the current status will be displayed in another window. As soon as the driver installation is complete, you will be prompted to reconnect the dongle.
Wait until the red light on the dongle lights up and then press any key to restart m.objects. Your license is now ready for use.
The USB dongle is compatible with all USB 1.1, USB 2.0 or USB 3.0 ports. Normally, it can also be connected to all types of downstream USB hubs without any problems. However, if there are any problems with the recognition of the dongle, please first try a USB port available directly on the computer instead of a hub.
To exchange a USB dongle for an activation code, please return the dongle to us by registered mail in a padded or reinforced envelope to the following address:
m.objects e.K.
Dahlweg 112
D-48153 Münster
Note on updating an existing program version
To make installation as easy as possible, it is strongly recommended that you do not uninstall the existing program before updating it. Install the new version over the existing one while retaining all settings. The installation program automatically replaces all files to be updated.
You can use the Help / Check for updates menu item to check directly whether your m.objects version is up-to-date or whether there are newer updates. The installed and available versions are displayed and it is shown whether it is a free update within your service period.
Click on the Show new features button to see a brief description of the latest functions and changes in m.objects. By clicking on Download setup you can download and install the latest version directly. A link to the online store is provided for possible chargeable updates.
The update check can be performed either automatically or manually. Enter the desired value under the Check for updates option; m.objects will then automatically notify you of available updates at the selected intervals. If you want to deactivate this notification, simply set the value to manual. Under no circumstances will m.objects carry out updates automatically; it will only indicate their availability.
If m.objects starts with the message The activation key is not valid for this program version, this is a version for which your purchased license is no longer valid. The expiration date for free updates can be found below the identification number.
It does not matter when you install an update. The message only appears if you install a version that was released well after the free update period has expired. If you are interested in an update in this case, please contact your AV dealer or the manufacturer. You can find more information on updates and prices at www.mobjects.com.
If you have already purchased a program update or upgrade, you will have received the required new activation code together with the delivery.
After entering the new code and pressing the OK button, the message Unlocking key accepted appears.
The activation codes can be checked and changed at any time via the program menu Settings / Activation.
The free update period shown in this window is also the service period for your m.objects license. Free live support is available to you during this period. You can find out more about this in the chapter Live support during the service period.
In addition to the proven model of permanent licenses, it is also possible to rent an m.objects license for a limited period of time. This can be particularly interesting if m.objects is to be used for a specific project or customer order.
You can choose between rental periods of one, three or six months. If desired, such a rental license can later be converted into a permanent m.objects license. A subscription model - as known from other photo applications - is not planned for m.objects.
The proven operating concept of m.objects does not differ between the Windows and Mac versions. So if you switch from the Windows application to macOS, you will immediately find your way around m.objects without any changes. The same applies to the image output in the m.objects canvas: The usual high and lossless output quality is also available under macOS.
Nevertheless, there are a few special features when using m.objects under macOS.
m.objects also supports drag and drop of files under macOS. The easiest way to insert images, videos and sound files into a show is therefore to drag and drop them from the macOS Finder into the image and sound tracks or onto the lightbox. If you do need to access files directly from within m.objects, the following applies: behind the drive letter M: you will find all the storage locations that you are familiar with from your Mac.
It is also recommended to copy the media files into the project directory using the media file manager (in the m.objects menu under File → Manage media files). After the first program start, the project directory can be found in the Finder under Macintosh HD → Users → Shared → m.objects Data. You can change it to a different storage location at any time via the menu item Preferences → Program settings.
The digital CD audio option is not offered in the Mac version of m.objects under the Record / insert audio file function in the context menu (right-click) of the audio tracks. You can easily insert music from audio CDs into m.objects by selecting the CD drive in the Finder and dragging and dropping the desired track(s) from there into the audio tracks.
Exporting an m.objects show as a presentation file (EXE) is also possible under macOS. However, this is only useful for passing on to Windows users, as macOS does not support the EXE file format.
The format of choice for sharing with users of any computer system, for mobile devices such as smartphones and tablets or for publishing on online portals such as YouTube or Vimeo is the export of a video file.
Under macOS, m.objects also supports the video encodings H.264 and H.265 in the container formats mov, mkv and mp4. Export in the AVI and WMV formats, however, is not possible under macOS. These formats are losing importance anyway, as they are generally only used under Windows and are therefore far less flexible.
The m.objects canvas always appears as an independent window under macOS, i.e. it is not embedded in the program's desktop.
To display the canvas as a full screen, you have two options under macOS:
1. Right-click on the canvas (as in Windows) and select the Full screen mode option from the context menu.
2. Use the Mac full-screen function. This option is generally more practical for presentations with m.objects under macOS, as it also prevents the menu bar or the Dock from appearing in front of the canvas image. To do this, click on the green dot in the title bar of the canvas window to display it in full screen. To exit full-screen mode again, position the mouse pointer at the top edge of the screen until the title bar appears and then click on the green dot again. This procedure is particularly practical if you are working with only one monitor.
In this mode, you can switch back and forth between the m.objects desktop and the canvas in full screen using the key combination Ctrl + right/left arrow key.
The windows of the m.objects desktop, such as the lightbox or the tool window, can be detached from the desktop by double-clicking on the title bar of the respective window. They are then displayed as separate, floating windows. To reattach them to the interface, right-click in the window, select the Window visibility option in the context menu and deactivate the As separate window option.
The options current + next thumbnail, current thumbnail only (proxy) and next thumbnail only (proxy) are available under macOS for viewing the speaker preview. The options for integrating the live image are not yet available here.
The majority of program options are set by m.objects to sensible default settings during the initial installation. During an update of an existing installation, any parameters changed by the user are retained.
Basically, no further settings are required on your computer to edit and play back an m.objects production.
If you are working with two or more display devices, for example with two monitors, or with a laptop and connected TV set or digital projector, you should first set up the extended desktop under Windows. This gives you the option of viewing the m.objects desktop and the m.objects canvas separately on two devices. This allows you to view the timeline of your production on one device, while on the other you can see the result of your work in full screen and in high resolution. This makes working in the extended desktop very helpful right from the production phase.
However, this is also how most presentations are held in front of an audience: the speaker sees the timeline and, if required, comments or the speaker preview, and controls the course of the presentation by means of wait marks and other functions of the so-called speaker support, while the audience simultaneously sees only the actual presentation, for example as a large projection.
The quickest and easiest way to activate the extended desktop is to press the key combination [Windows] + P on the keyboard. Then select the Expand mode.
If it should ever be necessary to make more differentiated settings (e.g. to swap the arrangement of the displays), you can use an appropriate form for this. In Windows 7, right-click on the empty desktop and select Screen resolution in the context menu. In Windows 8, 10 and 11, the corresponding command in the context menu is called Display settings.
Windows 10
To provide both output devices with separate image content, select the Extend these displays option under Multiple displays and then check the resolution set for both output devices. The native resolution, which should be selected, can usually be recognized by the addition Recommended. Under Windows 10, you must click the Advanced display settings link for this.
Once you have made the required entries and confirmed them with the Apply button, the second output device will already receive an independent image signal. As a rule, you will initially only see the desktop background image. If you now move the mouse pointer to the right beyond the edge of the first screen, it will appear on the left edge of the second display. The usable desktop surface is now the sum of the two screen surfaces. In this mode, windows and icons can now be positioned anywhere on one of the two devices. Of course, this also applies to the m.objects canvas, i.e. the image output module of m.objects.
Different scaling of several display devices
As of Windows 8.1, you can scale several display devices differently, i.e. in addition to the actual resolution, you can also set the scaling of the display for each device. m.objects fully supports this function. Please note, however, that after changing the scaling factor for the primary display device, it is necessary to log out and log back in or restart Windows so that all elements are displayed correctly scaled.
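The arithmetic behind display scaling is worth keeping in mind when checking resolutions. The following sketch is a simple illustration of that arithmetic (not m.objects code, and the function name is ours): at a given scaling factor, applications see a correspondingly smaller logical desktop than the physical pixel resolution.

```python
# Illustration of display scaling arithmetic (not m.objects code):
# at 150 % scaling, a 3840x2160 (UHD) display offers applications
# a logical desktop of only 2560x1440.
def logical_size(width, height, scale_percent):
    return (round(width * 100 / scale_percent),
            round(height * 100 / scale_percent))

print(logical_size(3840, 2160, 150))  # (2560, 1440)
print(logical_size(1920, 1080, 100))  # (1920, 1080), 100 % = unscaled
```

The physical resolution of the output device is unaffected by scaling; only the size at which interface elements are drawn changes, which is why m.objects can still output the canvas at full native resolution.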
Please also read the chapter Adjusting the display size of the interface.
The use of m.objects for complex media control as well as the integration with other media control systems and the control of other peripheral devices requires m.objects ultimate. Please refer to the documentation of the respective program module for more information on how to set this up.
The use of m.objects for projection with analog slide projectors is also still possible - even in combination with digital playback and complex media controls. However, as this area of application is only of comparatively minor importance, the setup of slide projector drivers has been outsourced to a separate documentation, which can be requested from the software manufacturer.
The following chapter shows you how to quickly and effectively create your own live audiovisual show from images and music in 12 basic steps. Please note that the m.objects tool window displays the tools that belong to the tracks you have just clicked on. You can also call up many of the functions described via the context menu by clicking with the right mouse button.
In some places, functions are used that are available from the m.objects live expansion stage. These are marked accordingly. If you do not have a license or are working with m.objects basic, skip these steps or test the functions in demo mode. You select this mode immediately after starting the program; basic users will find it in the program entry in the Windows Start menu.
There is also a video training course on this chapter on our website at https://www.mobjects.com/service/videotrainings/
Presenting with m.objects is particularly convenient if you activate the extended desktop under Windows. This allows you to distribute the screen content to several output devices so that you can operate the m.objects desktop on the monitor of your PC or laptop while your audience follows the presentation on the TV screen or via a projector on the big screen. To activate the extended desktop, simply press the key combination Windows key + P and then select the option (1) Extended or (2) Extend (Windows 10).
To prepare a new presentation, select the New Show command from the File menu and set up the m.objects desktop in just a few steps using the Project Wizard. Here you can either create a completely (1) new project directory or use an existing one (2) if you have already created a presentation on the same topic, for example.
You can then use the automatic configuration to specify the number of tracks, for example three (3) image tracks, (4) audio tracks and one (5) commentary track. You can easily change all these values later.
You can use the lightbox to select and sort the image material. Single or multiple images, as well as entire image directories, can be stored here directly from Windows Explorer. Alternatively, you can use the context menu. In the lightbox, drag one image onto another with the left mouse button to swap them. You can also place an image between two others by placing it on the bar between them. Press the Shift key to enlarge the image under the mouse pointer (1). By holding down the right mouse button and dragging, you can call up a preview with superimposition in full output resolution in the canvas window.
Now select all the images in the lightbox that you would like to insert into your show. To do this, either use the key combination Ctrl + A to select all images or drag the mouse along the bottom edge of the (1) desired images. Then move the selection to the image tracks, and the distinctive (2) dark yellow light curves with the (3) corresponding image thumbnails will appear. Tip: By clicking on the letter A on the left-hand side, you can mute the top (4) image track beforehand so that it remains free. This way, you still have space to insert titles here later.
In the previous step, you have already created an initial image sequence with the specified values for the freeze and fade times and can now present it. To control your presentation, you will find the corresponding Play, Pause and Stop buttons in the toolbar. It is even easier to control playback using the keyboard: press the space bar (1) to switch between pause mode and playback. Using the arrow keys (2) right and (3) left, you can navigate back and forth manually frame by frame. Press the (4) Esc key to return to stop mode.
First switch the (1) image track A back to active. Then drag the Text element object from the tool window onto this image track.
The title editor opens, and here you select the (2) font and enter the text. You can use the (3) pipette to pick up a color from an image in the canvas and select it as the text color. Click OK to insert the (4) text into your presentation. To improve readability, drag a (5) shadow/appearance object from the tool window onto the text light curve so that it is underlaid with a shadow. This shadow can then be modified in the options of the shadow/appearance object.
For dubbing, m.objects can record signals from external sound sources (e.g. line-in input, microphone). Here, however, it should be sufficient to store an existing sound file on the sound tracks. To do this, select the Search / insert sound file command from the context menu. There you can cut the sound - right-click at the desired point, cut audio - and also adjust the fade-in and fade-out phase as required by moving the handles on the envelope. In addition, you can create further handles via the context menu in order to be able to vary the volume curve freely.
Each handle of a light curve can also be moved freely in the image tracks. To extend a fade phase, drag a (1) frame around the (2) bottom right handle of the first image and the (3) top left handle of the second image, then click on one of them and drag to the right. Tip: If you click with the right and left mouse buttons at the same time, you can select all the content to the right of the cursor on the image tracks and then move it back together. You can also adjust the timing of entire image sequences quickly and precisely using the timing assistant from the Edit menu.
Available from m.objects live
For variable live commentary on individual images, it is advisable to use wait marks. Here, the presentation stops until you trigger the continuation. (1) You can place wait marks manually on the timeline or insert them using a wizard by selecting the relevant images beforehand. To avoid interrupting the background sound at a wait mark, you can define individual (2) sound passages as asynchronous. For spontaneous live commentary, use manual ducking: press a key on the keyboard or remote control to lower the volume, and raise it again later using the same key.
Available from m.objects live
You can control your live presentation even more conveniently than with a mouse or keyboard by using a wireless remote control. m.objects supports all common models. The use of a remote control is the method of choice, especially in combination with wait marks: you can trigger the continuation simply by pressing a button, no matter where you are in the room. In the Settings menu, m.objects offers the option of freely assigning all important control functions to the buttons on the remote control. This allows you to save the operating profile that best suits your personal presentation style.
Available from m.objects live
Speaker support for live presentations offers further options: the speaker preview from the View menu shows you (1) the current live image of the screen; next to or below it, (2) the following image appears so that you can prepare for the next part of your story. The total elapsed time is displayed in the (3) Presentation time window. You can also use it in countdown mode (double-click in the presentation time window) to see how much of the previously set presentation time is left. You can also store (4) texts on the commentary track to use as a memory aid or script during the lecture.
Instead of presenting from the timeline, you can also save the (1) production as a compact executable file and present it from there. Such a presentation runs without loss of quality, with almost all program functions, on any Windows PC, even if m.objects is not installed. This is an advantage if you want to present on someone else's computer. To create a backup of your production or to transfer it to another PC, you can use the media file manager from the File menu to copy all the files used into the project directory and transfer them to an external data carrier.
- The screen window is used for high-quality output of all visual content. It should therefore be switched to full-screen mode during playback.
- The context menu, accessed via the right mouse button, is almost always a good tool, as it contains a range of commands relevant to the current mouse position.
- For all drag & drop operations (dragging and dropping with the mouse), the mouse cursor indicates whether the action is valid or not. For example, this prevents you from dropping a video whose length is greater than the available gap on the selected image track.
- You can select individual objects on the tracks in addition to those already selected, or deselect them from an existing selection, by holding down the Ctrl/cmd key while clicking on the objects or moving over them with the mouse button pressed.
- You can duplicate individual objects or several objects in a group by moving them to the target point and pressing and holding the Ctrl/cmd key before releasing the left mouse button.
- You can edit similar objects together, such as photos, videos and audio passages, but also dynamic objects such as zoom, 3D animation and the like on the image tracks. To do this, first select the objects, e.g. by circling the desired area with a lasso. It is irrelevant whether objects of a different type are also selected. Then call up Edit object for one of the objects to be edited (context menu) and make the desired changes. After confirming the corresponding form, the changes are applied selectively to the other similar objects in the selection.
- You can swap images between two light curves directly on the image tracks. To do this, click on the small gray square at the top left of the image thumbnail, hold down the mouse button, drag the image to another light curve and release.
- Via the View / Media list menu, you can display and print a list view of the events in your entire show. Muted tracks are suppressed here, and sorting according to various criteria is also possible, allowing you to extract the relevant information for different purposes. By clicking in the media list, the locator jumps directly to the corresponding position in the show.
- There is nothing in an m.objects show that cannot be changed later. For example, if you want to use more tracks for the sound than originally intended, simply modify the properties of the component: right-click within the existing tracks and select Edit component from the context menu.
- You can simply insert additional handles in light curves and volume envelopes and modify them as required. The context menu via the right mouse button is also the right way to do this. However, it is even quicker to double-click at the desired position.
- You can use practically any sound card to record sound from external sound sources into your PC, i.e. create samples for integration into m.objects. This can be sound from old tapes and records, for example, but of course also commentary directly from a microphone. You will find the required functions in the recording form, where you can also import music from a CD. In this case, however, switch to the external recording tab.
- You can trim the beginning of a sample by moving the first two handles of the volume envelope together to the right. Similarly, you can shorten the sample at the end by dragging the rear handles to the left.
- If possible, deactivate power management (energy-saving mode) and the screen saver on your PC, or set both so that their activity cannot interfere with the playback of m.objects productions. Starting a screen saver can consume a large proportion of a PC's computing power and thus lead to disruptions in playback. Reducing the computing power using energy-saving functions can have the same effect.
1. command menu 2. toolbar 3. time display 4. tool window 5. light panel 6. canvas, reduced and docked 7. comment window 8. time ruler 9. image tracks 10. track designations / activation switches
11. locator 12. audio tracks 13. commentary track 14. buttons for quick change of sound output, insert, display scaling and show/hide 15. display of the presentation time 16. display of the time 17. audio status window 18. status bar
The active component of the m.objects program interface is displayed with a colored frame. In the image, the Projection component - i.e. the image tracks - is active, as indicated by the blue L-shaped frame. The tools associated with the active component are displayed in the tool window.
To ensure that m.objects remains easy to use on particularly high-resolution monitors, i.e. the fonts and symbols do not become too small, the desktop can be scaled as required.
By default, m.objects initially uses the value that is set in the operating system for scaling the fonts. However, you can adjust the magnification factor for m.objects individually. Under Settings / Program settings, tick the option Manual scaling of the desktop and move the slider to the desired value.
Depending on your requirements, you can increase the scaling value for relaxed working or improved visibility for multiple viewers, or reduce it for a better overview when positioned close to the monitor. All editor controls, menus, toolbar, window elements, forms and messages are adjusted immediately. It is not necessary to restart the program.
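The scaling logic described above can be pictured as a short sketch. This is purely illustrative: the function name and example values are assumptions, only the option Manual scaling of the desktop comes from the text.

```python
def effective_ui_scale(os_scale, manual_scaling_enabled, manual_scale):
    """Return the factor applied to the fonts and symbols of the desktop.

    By default the operating system's font-scaling value is used;
    once the 'Manual scaling of the desktop' option is ticked,
    the slider value takes precedence. Since no restart is required,
    the result can simply be re-applied to all interface elements.
    """
    return manual_scale if manual_scaling_enabled else os_scale

# A high-resolution monitor with 150 % OS scaling:
print(effective_ui_scale(1.5, False, 1.75))  # OS value is used -> 1.5
print(effective_ui_scale(1.5, True, 1.75))   # manual override  -> 1.75
```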
The individual components of the m.objects desktop can be moved and rearranged as required. This allows you to create exactly the environment that makes working with m.objects the most comfortable for you.
For many windows in the program interface, you will see a double bar that can be grabbed and dragged to move the window from its current position to another location and dock it back onto the desktop. If you grab the frame of the window next to this double bar, it can also be moved, but will not dock at the new position. Double-click on the frame to detach a window from the desktop so that it is displayed as a floating window. Double-click the frame again to return the window to its previous, docked position. This means that you always have the option of positioning the individual windows separately so that they can be moved freely at any time, or docking them at a specific point on the program interface, where they are then firmly anchored on the one hand, but remain scalable in their width or height on the other.
You can also change the arrangement of the components and, for example, place the picture tracks under the sound tracks. To do this, simply drag the bracket with the label (e.g. Projection or Digital Audio) up or down. You can restore the original arrangement in the same way.
All dockable windows, such as the tool window, the lightbox or the speaker preview, can be shown and hidden automatically in editing and presentation mode. To do this, right-click in the desired window and select Visibility of window in the context menu at the bottom.
Here you can specify whether a window is always displayed, only displayed when the show is being edited (in stop mode) or only displayed when the show is being played (in pause and play mode). The never show option closes the window. This allows you to set up the window layout of your m.objects workspace so that the tool window is visible when you are editing your presentation in stop mode, for example. However, as soon as you switch to pause or play mode for playback, the tool window is hidden and the speaker preview, comment window or any other window is displayed in the same place instead. This saves space and makes the interface clearer, but above all you always have exactly the controls and windows in front of you that you need in the respective situation.
You can easily save your individual window layouts for the desktop under the menu item File / Save window layout and call them up again at any time via File / Load window layout to switch between the window layouts.
You set up the appropriate layouts for different purposes, i.e. arrange the individual components of the interface in a way that is particularly useful for the work you are currently doing on your Multivision and save them under a corresponding name. You decide, for example, whether the toolbar is positioned above the image and sound tracks or instead arranged vertically next to the tracks, whether the tool window is arranged on the right or left, horizontally or vertically. For example, you can create a layout specifically for presentations and others for video or sound processing and use them as required. The automatic fading in and out of dockable windows (see above) is also saved in the window layout.
If you want to change the number of tracks, for example the image, sound or commentary tracks, double-click on the bar below the respective component. On this bar you will also find the name of the component, for example Projection or Digital Audio. Enter the desired number of tracks in the window that then appears.
Depending on the configuration level, m.objects provides you with different numbers of image and sound tracks. Users of m.objects creative or ultimate can enter any number of tracks here.
By default, m.objects inserts new tracks below the existing ones or deletes tracks from below if the number of tracks is reduced. With the Insert tracks above or Remove tracks above option, you can insert new tracks above the existing tracks or delete the tracks from above. Please note that when you delete a track, you also delete the objects it contains. Of course, this does not delete any original files, but only the objects and their properties on the track in question.
If you want to insert an object into a track, for example an image into an image track that is not currently in the visible area of the desktop, simply drag the new object to the top or bottom edge of the screen. m.objects now automatically scrolls up or down and you can place the object in the desired position.
If you have saved a show accordingly, m.objects automatically scrolls to the last position displayed when you reopen this show. In this way, the program visualizes that there are other tracks outside the currently visible area.
You can also save a customized m.objects desktop as a configuration. You can find the corresponding entry in the menu under File / Save configuration as.
In contrast to saving a window layout, a configuration is not used to change the layout while working with the program, but rather as a template for new shows. For this reason, the number of tracks, macros, resolution and aspect ratio of the m.objects canvas are also saved in a configuration.
If you then open the project wizard under File / New show and select the option Use existing configuration as basis, a list of your individual configurations will be available in addition to some predefined standard entries.
The timeline is the central element within m.objects. It contains the time ruler and the image and sound tracks as well as other tracks if necessary. The timeline is used to control all time sequences: the order of the images, the duration of the images and sound samples, fade-in and fade-out times, the duration of fades, zooms, rotations and other effects, i.e. ultimately the duration of the entire show.
In order to set up the time sequence in an m.objects presentation as precisely as possible, it can be helpful to enlarge the display of the desktop:
Use the magnifying glass with the plus sign in the toolbar to widen the timeline and tracks, so that the time intervals are visually extended. This makes it much easier to place objects precisely. Alternatively, you can also achieve this effect using the plus key on the keyboard. Conversely, using the magnifying glass symbol with the minus sign or the minus key makes the display narrower again, resulting in a better overall view.
The second option for a more detailed display is the double arrow under the tracks.
Hold down the mouse button and move this double arrow up or down to enlarge or reduce the respective tracks.
The changes to the display size naturally only affect the editing within m.objects. This does not change the display on the canvas.
You can mute individual tracks in m.objects, i.e. deactivate them so that the contents of these tracks are not taken into account during playback or when inserting or moving media. This can be particularly helpful during the production of an m.objects show.
To do this, click on the symbol for the track name at the very beginning. If it is crossed out, the track is muted. To reactivate it, click on the symbol again so that it is no longer crossed out.
Right-click on the icon to deactivate or activate all tracks except for the selected one.
The time ruler shows the exact time position within the show on a scale. Depending on the magnification selected with the magnifying glass symbol, accuracy down to the millisecond range is possible here.
When scrolling vertically in projects with many tracks, the time ruler remains in its position if it is arranged above the tracks. However, you can also move it below the tracks or, for example, between the video and audio tracks by moving it with the mouse. In this case, the time ruler is moved along with the vertical scrolling.
The wider the light curve of an image, the longer its stand time, i.e. the more time passes between the image fading in and fading out. If you want to shorten the stand time, push the light curve together; pull it apart to extend the stand time. To do this, use the mouse to mark the upper and lower handles of the fade-in or fade-out, then hold down the left mouse button, grab one of the marked handles and drag the mouse pointer in the desired direction.
To change the fade-in or fade-out itself, simply grab the lower handle and pull it towards the light curve to shorten the time or away from the light curve to lengthen the time. Proceed in the same way with the sound samples.
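The relationship between fade times and stand time described above boils down to simple arithmetic. A hedged sketch (the function and variable names are illustrative, not part of m.objects):

```python
def stand_time(total_duration_s, fade_in_s, fade_out_s):
    """Stand time: the span between the end of the fade-in
    and the start of the fade-out of an image."""
    return total_duration_s - fade_in_s - fade_out_s

# An image occupying 8 s on the track with 1.5 s fades on both sides:
print(stand_time(8.0, 1.5, 1.5))  # -> 5.0 seconds at full brightness
```

Lengthening a fade by dragging its handle away from the light curve therefore shortens the stand time by the same amount, as long as the total duration stays unchanged.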
The image and sound tracks (and possibly others such as commentary tracks) are therefore always directly related to the time ruler.
To navigate through the timeline, you can use the scroll bars at the bottom or right edge. However, it is more convenient to use the scroll wheel of the mouse or the right mouse button: By turning the scroll wheel, you can move step by step to the right and left. If you press and hold the right mouse button over the tracks instead, you can drag the tracks continuously to the right and left as well as up and down.
To get to the very beginning of the timeline, simply press the [Home] key ([Pos1] on German keyboards). Pressing the [End] key, on the other hand, takes you to the end of the timeline, i.e. to the last object stored on the tracks or on the time ruler.
Use the right and left arrow keys to move the locator, i.e. the playback head of m.objects, step by step by 20 ms (i.e. 1/50 of a second) in the corresponding direction. However, if the locator is within a video, the arrow keys move it forwards or backwards by exactly one video frame at a time, which corresponds to more or less time depending on the frame rate of the video.
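The step size of the locator can be summarized as follows (an illustrative sketch of the behavior described above; the function name is an assumption, not an m.objects API):

```python
def locator_step_ms(inside_video=False, video_fps=None):
    """Step of the locator for a single arrow-key press.

    Outside a video the locator moves in fixed 20 ms increments
    (1/50 second); inside a video it advances by exactly one frame,
    whose duration depends on the video's frame rate.
    """
    if inside_video and video_fps:
        return 1000.0 / video_fps
    return 20.0

print(locator_step_ms())            # -> 20.0 ms
print(locator_step_ms(True, 25.0))  # -> 40.0 ms (one 25 fps frame)
print(locator_step_ms(True, 50.0))  # -> 20.0 ms (one 50 fps frame)
```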
If you have selected an object on an image track, the locator moves to the next or previous object on the same image track when you hold down the Ctrl/cmd key and use the arrow keys.
In addition to the alphabetical or numerical designations of the image and sound tracks, which you will find in the frame to the left of the tracks, you can also give individual tracks a descriptive name, for example to indicate their use. Image track B, for instance, can be given the name Masks if you want to store masks on this track. Right-click in the track and select the Edit track option in the context menu.
Enter the desired name in the following form and confirm with OK. If you now move the mouse over an empty area of this track while holding down the Shift key, this name will appear next to the mouse pointer. This procedure can also be applied in the same way to the audio tracks and, if used, to the tracks of other components.
If you drag the mouse pointer to the track name of an image or sound track on the left-hand side, m.objects automatically displays an info window. In addition to the track name, you will find further information such as the currently active sound card, sound attenuation or the use of auto-ducking.
If you move the mouse pointer over an object on a track while holding down the Shift key, the most important parameters set for this object are displayed.
For example, you can display the zoom factor and the position of the zoom center of a zoom object without having to open the properties window for this object. This also applies to the properties of all other dynamic objects, for the locator, for all curve handles and for objects on the time ruler.
If you insert an image into a track in the m.objects workspace, it will normally first appear in a dark yellow light curve. This means that the image mixing is set in additive mode. If, on the other hand, the light curve appears in green, the overlapping mode has been selected. This is the case, for example, if you are working with an image field object. You can read more about this topic in the Image blending chapter.
If an image is used as a mask, m.objects displays the light curve in gray. Further information on this topic can be found in the Masks chapter.
You will find the status bar at the bottom of the m.objects desktop. If it does not appear there, select View / Status bar to show it.
You can see information about the textures on the left-hand side of the status bar. When loading a show, the number of textures still to be loaded or calculated is displayed there. Once this process is complete, the message Textures completed appears.
On the right-hand side, you will find the Undo and Redo indications, which show you how many steps you can go backwards or forwards again. On the left-hand side, you will see the exact time for the position of the locator on the time ruler.
As soon as you switch to pause or play mode, the Undo and Redo information disappears and the screen refresh rate in fps (frames per second) appears instead.
This allows you to check whether your system is playing a show at the desired constant frame rate.
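Checking for a constant frame rate means nothing more than looking at the spacing of successive frames. A small sketch of the idea (illustrative only; m.objects performs this measurement internally):

```python
def measured_fps(frame_times_ms):
    """Estimate the refresh rate from successive frame timestamps."""
    deltas = [b - a for a, b in zip(frame_times_ms, frame_times_ms[1:])]
    return 1000.0 / (sum(deltas) / len(deltas))

# Five frames rendered 20 ms apart correspond to a steady 50 fps:
print(round(measured_fps([0.0, 20.0, 40.0, 60.0, 80.0])))  # -> 50
```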
All elements displayed in raised form on the tracks within m.objects are objects that can be edited directly with the mouse. In the case of the image tracks, for example, these are the handles at the corners of the light curves. Most of the work on a production consists of moving the objects horizontally in order to synchronize them.
To be able to move objects, they must first be selected. In the simplest case, the selection is made by positioning the mouse pointer over an object and a simple mouse click. A selected object is displayed dark, while unselected objects are light gray. The duration of fade-ins and fade-outs, for example, can be changed in isolation by moving them with the mouse button held down. The neighbouring image stand times on the respective track change accordingly, provided that these objects are not selected and therefore moved at the same time.
There are various techniques for selecting multiple objects:
Once an object has been selected, others can be added to the selection by left-clicking while holding down the ctrl key.
Dragging a frame selects all objects in this area. A frame can be drawn by left-clicking next to an object and dragging the mouse while holding down the mouse button.
Commands for selecting all objects to the right, left or both sides of the mouse pointer are available via the context menu in the free area of the tracks (right mouse button). The selection can be limited to the current track or the current component (e.g. image tracks) or include all components.
Clicking with the left and right mouse buttons simultaneously selects all objects from the current mouse position in the current component. This technique is probably one of the most frequently used, as it is very practical for synchronizing image and sound, among other things.
After selecting several objects, only one of them needs to be moved in order to move them all at the same time.
For all types of selection of an area on the timeline - i.e. for the lasso function as well as for the selection of all objects to the left or right of the mouse pointer - you can restrict the selection to a specific object type by holding down the Alt key.
Either the last timeline object clicked on or the last tool selected in the tool window determines the object type to be selected. For example, to remove or move all dynamic shadow objects in an area, first click on the Shadow/Shine tool in the tool window or on a light curve and then drag the desired area with the mouse while holding down the Alt key. This selects only the shadow/shine objects in this area.
On the right-hand side of the toolbar you will find the magnet icon, which is activated by default.
The magnet helps you to position objects exactly in sync with one another. If you align image transitions manually, the magnet ensures that the start of the fade-in of one image snaps exactly over the start of the fade-out of another image. This functionality is very helpful when you are experimenting with the order of the images and moving them back and forth on the timeline. You can easily move the images to the correct position.
The magnet works in the same way in the sound tracks - so that you can also precisely align crossfades between sound samples here - and also between picture and sound tracks. The precise alignment of sound envelopes to light curves is therefore also possible.
If you want to work without the magnet function, simply press the Alt key when moving to temporarily deactivate the magnet. If you want to deactivate it completely, click on the icon so that it is no longer selected. You can then temporarily reactivate the deactivated magnet by pressing the Alt key.
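Conceptually, the magnet works like a snap-to-nearest-anchor function. A simplified sketch under stated assumptions (the threshold value and all names are illustrative, not m.objects internals):

```python
def snap(position_ms, anchors_ms, threshold_ms=100.0, magnet_active=True):
    """Snap a dragged handle to the nearest anchor within the threshold.

    Anchors correspond to the starts of fade-ins and fade-outs of
    neighbouring images or sound samples; holding the Alt key while
    moving corresponds to magnet_active=False.
    """
    if not magnet_active or not anchors_ms:
        return position_ms
    nearest = min(anchors_ms, key=lambda a: abs(a - position_ms))
    return nearest if abs(nearest - position_ms) <= threshold_ms else position_ms

print(snap(5030.0, [5000.0, 8000.0]))               # snaps -> 5000.0
print(snap(5030.0, [5000.0], magnet_active=False))  # stays -> 5030.0
```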
The clipboard is a suitable tool when it comes to moving or copying sequences from one show to another or repeating a sequence elsewhere. To do this, use the Cut selection or Copy selection commands in the Edit menu. One or more objects are stored in the Windows clipboard. They remain there until they are overwritten by other content or until Windows is closed. The difference is that the objects are removed from their original context when they are cut, whereas they are retained when they are copied. They can then be pasted from the clipboard into the same or another show using the Edit / Paste clipboard command. A prerequisite for the operation to succeed is that a suitable area (corresponding components with a corresponding number of tracks) is available. If this is not the case, a message appears.
There is another interesting application for the clipboard. Imagine you have been working on a show for a long time and at some point you delete a sequence that you didn't like at first. In retrospect, however, you realize that you would like to use this sequence after all. Instead of recreating it, you use the Undo function to go back the required number of editing steps and copy the sequence. It is then placed on the clipboard.
To avoid having to reproduce all subsequent steps manually, use Edit / Redo to return to the last created state of the show. You can now paste the desired sequence from the clipboard. Please note, however, that you should not make any changes during this process, as the saved redo steps would otherwise be lost.
Groups of events can be easily formed into new tools using the Edit / Create macro command and stored in the tool window for repeated use. To do this, first select the desired objects within the show editor and then select the corresponding command from the command menu or the context menu (right mouse button on one of the objects involved). You then have the opportunity to give the new macro a name. This must be different from the names of any existing macros.
If a macro extends over several components, it is assigned to the event patterns of the component active during creation.
Existing groupings of events are also transferred to macros; the fixed object property, however, is not.
Macros are saved when a show file is saved so that they are available again after the next load. The macros are also created within the configuration files(File / Save configuration as) so that they are available for new projects. Of course, macros can also be transferred from one show to another via the clipboard.
If a macro is used, an image of the events defined in it is created on the tracks. There is subsequently no connection between the events and the macro, so a change to the events does not affect the macro or other events created by the same macro. To change a macro, insert it, change the event objects as required and create a new macro from it, if necessary after deleting the old macro.
If you paste macros or previously copied content from the clipboard into a show, the track-related hierarchy of the objects is retained. For example, if you insert a macro that contains images on several tracks with overlapping content, its function and thus the visual effect is retained in any case. Of course, this arrangement can be individually changed later.
For the transfer to the clipboard (copy + paste) and the creation of macros, it makes sense in most cases to select complete units (event units such as light or sound curves). If only part of a light or sound curve is selected when a corresponding function is called up, a query enables the selection to be automatically extended to all units of which at least one individual object has been selected. If, for example, only a single fade-in handle of an image is selected, such a query appears automatically when the copy function is called up.
If additional curve handles have been inserted within an existing curve on the timeline, be it to reduce the volume of an audio sample or to temporarily reduce the brightness during a title fade-in, this group of handles can be selected in isolation, transferred to the clipboard (Copy or Cut selection) and inserted into another curve. You can also create a macro from this constellation of objects. You can then insert this macro as an independent tool into existing curves in order to achieve an effect such as a reduction in brightness. If you want to copy such an incomplete object selection or use it as a template for a macro, the query about an automatic extension of the selection must be answered with No; otherwise, the object selection is automatically expanded to include the entire curve and all the handles it contains.
If you move one or more objects on the tracks while holding down the [Shift] key, the assignment of the objects to the respective track is retained when moving horizontally. For example, if you move several images to the right or left while holding down the [Shift] key, the images remain on their respective image track. This also ensures that the track-related hierarchy of several objects is retained. You therefore avoid unintentionally moving objects vertically between the tracks.
When moving vertically with the [Shift] key pressed, however, the temporal positioning of the objects is retained. For example, if you move an image one track up or down, it will remain exactly at its temporal position and will not be moved horizontally by mistake.
The handles of a light or sound curve can initially only be moved horizontally. This prevents the upper handle from being accidentally moved downwards when manually extending the fade-in phase of an image, for example. If you hold down the [Alt] key, however, you can also move the handles vertically.
You can also change this behavior in the settings: to do this, select the Settings / Program settings option in the menu and then the Timeline editor and pool tab. Then check the box next to Allow shifting the height of curve handles without the Alt key.
You can lock objects on the timeline to prevent them from being accidentally deleted or moved. To do this, select the relevant objects and then fix them using the Edit / Fix event(s) command. The objects are then displayed with a blue line around them.
Use the Edit / Detach event(s) command to detach fixed objects and then move or delete them again as usual.
You can combine objects into event groups to fix their relative position to each other, for example several images or images and sound samples. Events that belong to a group are displayed with a dark frame around the handles.
The Edit / Create event group menu command is used to create event groups. All event groups included in the current selection are exploded again using the Edit / Explode event group(s) menu command.
In certain cases, m.objects also creates event groups automatically, for example when you separate the sound from a video so that it is stored as a sound sample on an audio track. Video and sound then form an event group.
If you select an object in an event group, this selection is automatically extended to all other objects in the group. Moving, copying and deleting is then only possible for all objects in the group at the same time, with one exception: If an event group extends over several components, as in the case of video and sound, you can subsequently change tracks within a component.
For example, you can move the sound of the video to another sound track without affecting the alignment of the video in the picture tracks.
In the bars below the light curves of the images in m.objects you will find information on the fade-in, standstill and fade-out time of the respective image.
In addition, when you click on or move objects, m.objects displays, to the right and left of the mouse pointer, the time remaining to the next object of the same type.
For example, if you click on an image field object on a light curve, m.objects will display the temporal distance to the previous image field object in the same image track on the left and the distance to the following image field object on the right. This is particularly helpful when it comes to positioning image fields or other objects at exact time intervals.
In many places in m.objects you will come across the orange arrow controls with which you can change certain values. Operation is as simple as it is convenient, as they can be used to make both normal and particularly fine-grained changes.
As an example, you can see extracts from the editing windows of the image field and 3D object.
You can see that the arrow controls have different shapes and orientations. These different appearances always refer to the respective value that can be changed with them. The arrow at the top of the image on the left, for example, represents the top edge of an image field, the position of which can be moved using the control. Width, height and size, on the other hand, are marked with double arrows. This means that two values change at the same time and in opposite directions, for example the position of the right and left edges of the image field when the width is changed. The 3D object on the right has curved arrows. They are used to change the angle of rotation. This means that an object is rotated, which changes its display accordingly.
To edit, click on an arrow and then hold down the left mouse button and drag in the direction shown. You can follow the changes continuously in the m.objects canvas. If you move the mouse in the opposite direction, you can change the respective value in reverse.
If you click and drag with the right mouse button instead, you will change the respective value much more slowly. This is how you carry out the 'fine tuning'.
The controls with an additional blue arrow are a special feature. This allows the two adjacent controls to be operated simultaneously.
Use the reset buttons to reset the respective values to the default.
Instead of using the orange arrow controls, you can also set the relevant values with the mouse wheel. The modifier keys Shift and Ctrl determine which value is changed. The Alt key is used for fine adjustment, analogous to the right mouse button on the arrow controls.
It is possible to automatically adjust individual or all properties for all objects stored on the timeline. To do this, you must first create a selection that contains all the objects to be changed, e.g. by dragging a frame. It is irrelevant whether objects of other types are also selected. Double-click on one of the objects or use its context menu (right-click) to open the corresponding object form. Enter the desired changes here and exit the form by clicking on the OK button. A selection list of all object properties of this object type then appears. The values that have just been changed are marked with an * and preselected. By selecting and deselecting individual properties, you can now define which of them are to be automatically transferred to the other objects.
For example, gamma correction can be applied to an entire group of images without affecting other filters that have already been individually set. It is also possible to modify the font of several selected texts in one go without changing the different color and font style settings.
The introductory chapter of this manual, 12 steps to a live presentation, gave you compact instructions for creating and presenting your own show in just a few steps. However, there is much more to the program interface than those first steps show. Not that it gets particularly difficult from here on, but it does get really exciting, because the possibilities m.objects offers you are extremely varied. The following chapter gives you a comprehensive overview of the various functions and options for editing an AV show.
First of all, every AV show that you create with m.objects is simply a project. This could actually be the end of this chapter. But there are good reasons why it is not, as there are a few potential pitfalls to avoid, especially at the beginning of your work.
When you create an AV show from images, sound material and videos, you rarely have an exact idea from the outset of which photos will be superimposed in which order, how long the individual stand times will be, where you will insert videos and when you will use which sounds. Rather, an AV show is the result of a creative process: ideas are tested and discarded, and new ideas emerge. You will delete photos that you initially used or replace them with others, and new ones will be added. The more extensive the show, the more data material you use.
You can probably guess what this means: without effective file management, sooner or later you would be faced with a data chaos that is almost impossible to navigate. The good news is that m.objects provides you with highly effective file management in the form of project directories and the program's internal file management. If you pay attention to a few points when creating your productions, you can concentrate on the actual creative process without any worries.
If you now want to create a new project, select File / New show from the program menu. The project wizard will then open.
Under Storage location, you first have the choice of saving the new show in a new or an existing directory. If you want to create different variations of a project, these should be in the same directory, as in this case the productions will use the same source material. To do this, click on the option Create new show in existing project directory. Use the drop-down menu to the right to select the desired directory.
It is best to create a show for a new theme in a new directory. In this case, select Create new project directory and enter a name.
In the lower part of the window, you can choose between the options Automatic configuration and Use existing configuration as basis. If you select Automatic configuration and confirm the window with OK, simply enter the number of image and sound tracks and, if desired, the commentary and DMX tracks in the following Configuration Wizard window. You also specify the aspect ratio of the m.objects screen and whether, in full screen mode, it is always displayed, only displayed during playback (i.e. when the show is being played) or not displayed at all.
You can also make things as easy as possible for yourself at this point and click on the Default settings button to pre-select a ready-made configuration. All entries you make at this point can be changed later.
If you select the Use existing configuration as basis option in the project wizard beforehand instead, you can select from a range of ready-made configurations via the drop-down menu.
The number of picture and sound tracks and the aspect ratio of the screen are specified here. You can also change these settings later if required.
Please note, however, that changes to the aspect ratio of the screen may require further changes within the production at a later stage, especially if you are working with complex animations. More on this topic follows in the chapter Setting the aspect ratio.
Whichever way you have chosen, clicking on OK opens the Create new show in project as ... window, in which you can give the new production a name and then confirm with Save.
If you have selected a new project directory, m.objects will now create it in the background. It automatically contains the subfolders Midi, Pic, Sound and Video. This is an important step towards a clear file structure, as these folders will later contain all the data relevant to the show.
Midi stands for music files that can be played back by a PC-integrated or external synthesizer, Pic for images, Sound and Video for sound and video files. As you work on your project, m.objects will automatically add further folders. These include the mob_Auto folder with the texture data - files derived from the original images that m.objects uses for the actual presentation. As a rule, you do not need to worry about the contents of mob_Auto; m.objects manages the files in it itself. If you delete this directory, m.objects will automatically create the required texture data from the original images again the next time the show is loaded, provided that these can be accessed. The .mos file, the actual core of the production, in which the structure of the show, transitions, standstill times, zoom effects etc. are stored, is also saved in the project directory.
The project folders are usually located in the m.objects Data / Show directory. You will save yourself a lot of potential errors and therefore time and effort if you leave the project folders as they are. Although you can add your own folders without hesitation, it is neither necessary nor sensible to change or even delete files or folders from this structure. On the contrary: incorrectly deleted or moved files may mean that m.objects can no longer play back a show correctly. This is because the program can only access the data for which it knows the storage location.
So if you remove an image from a show, you do not need to delete it here. As you will see in the next section, m.objects ensures in a very elegant and convenient way that there is no superfluous data in the project folder and gives you a good overview of the files used in your show.
Open the File menu in the program and select the Manage media files option. This puts an important m.objects tool on the screen that shows you at a glance which source material is used in your AV production.
The illustration above shows an overview of image and sound files from an m.objects project. All files are located in the project directory and are also used in the show. They are therefore either on the image and sound tracks (shown in green) or in the file pool (shown in blue), i.e. in the lightbox or, in the case of sound samples, in the tool window.
Files that are only available in the file pool can also be completely removed from the project via the file manager. To do this, select the relevant files in the list and click on the Remove object(s) from pool button.
The files are only removed from the file pool, but not deleted from the hard disk.
The individual file entries in the list are linked. Clicking on an entry takes you directly to the corresponding location in the image or sound tracks, the locator is positioned there and the light curve or sound envelope is selected. If you click on an image from the lightbox, it is opened and the image is displayed there.
The number of uses in the show is indicated before each file entry. So if you use files several times in your show, each click on the entry will take you to the next occurrence on the tracks or in the lightbox. This technique allows you to navigate easily and conveniently through your show and select individual files.
If you right-click on an entry in the file manager, the option Link to another file appears. Alternatively, select the file with a mouse click and then click on the Link to another file button below the list to replace the inserted file with another one, which then appears in the corresponding position(s).
In the illustration you can see that some files are marked in red and labeled as missing, used in show.
For files marked in red, m.objects does not know the storage location. One reason for this may be that the images, videos or sound samples have been moved to a different directory on the computer or originate from an external hard disk that is no longer connected. m.objects can - in the case of images - still access the existing textures. However, if the output resolution is changed, the absence of the original data inevitably leads to a loss of quality, as the optimum display cannot be derived from the original image again. As an additional warning, the image thumbnails in the light curves are shaded red and displayed with the label Source missing.
Missing videos or sound samples are not displayed at all in the presentation. There is no counterpart to the textures for these files that m.objects could fall back on.
To restore the missing link, click on the red file entry and then on the Search for missing file button. If the files are on an external hard disk, make sure that it is connected.
In the following Open window, select the folder containing the corresponding file under Search in, select the file and confirm with Open. This corrects the link and the file is no longer marked as missing. If there are other missing files in the same folder, m.objects will now automatically reassign them correctly.
Files marked as external in the file manager should also be viewed with caution.
These are images, videos or sounds that are not saved in the project directory of your show. As long as it is ensured that m.objects can access these files and knows the storage location, there will be no problems during playback. To prevent possible problems, you should copy all the files used into the project directory - the easiest and safest way to do this is with the m.objects file manager.
If you use external files in your show or insert them into the show, a corresponding message appears once per working session.
Click on the Start file management now (recommended) button. The Copy external files to the current project directory option is already preselected at the bottom of the file manager. You now only need to click Execute file operation and m.objects will automatically copy the files to the project directory. You will then receive a confirmation message.
If desired, m.objects can also move the files instead of copying them. To do this, select the option Move external files to the project directory. Please note, however, that in this case other applications may no longer be able to access the files at their original storage location.
The files marked in gray are not used on the video and audio tracks or in the lightbox or tool window.
This means they are ultimately superfluous and can be removed from the project directory. You should also leave this to the file manager, as it is easy to lose track of things in large productions. In the file manager, select the lowest option Export current show to a new project directory and execute the file operation. All unused files are ignored during this operation. Your AV show will then be available in 'cleaned up' form in the new directory. You can perform this action before saving the project to an external hard disk or CD / DVD to avoid taking up unnecessary storage space.
All fonts that you have inserted into your show in the form of text elements using the m.objects title editor will appear in the file manager. The file manager displays a separate branch for this purpose.
All fonts used on the computer are listed under the branch installed in the system.
If there is a branch here with the label Missing, this means that the font listed below is not installed on the computer. This can occur in particular if you have transferred a show from another computer on which this font is available. m.objects then uses the existing texture as with the images and still displays the text in the canvas. However, as soon as you make changes to the text or change the resolution of the canvas, for example, m.objects cannot create a new texture and can no longer display the text. You must then install the corresponding font or alternatively change the font in the text. The installation of a font is not possible in the file management, but must be carried out in the computer system.
There is one important decision that you should make at the very beginning of the new project: What aspect ratio should the show be created in?
While you can easily change all other program settings such as the number of tracks during the course of the project, subsequent changes to the aspect ratio should be avoided if possible. The reason for this is simple: if you change the aspect ratio from 4:3 to 16:9, for example, all inserted image field objects, all zoom centers and rotations will inevitably shift. Zoom movements may no longer describe the desired motion sequence across the images - in short: the show can only be corrected in the new aspect ratio with additional effort so that it runs in the desired manner. This is because you cannot avoid editing each of these effects individually. It is therefore all the more important to make some considerations in advance.
The aspect ratio of an AV show is essentially based on two criteria: the aspect ratio of the output medium (screen, computer monitor, TV screen) and the aspect ratio of the image material used. Of course, desired effects can also play a role, so that the decision is made in favor of an extremely wide format, for example. On the other hand, it is not always clear from the outset which output medium will be used. Or the images are available in different formats.
The decision will therefore not always be easy, and in some cases it will be worth accepting the additional expense of a second production in a different aspect ratio. A generally valid recommendation is not possible in this respect.
Once you have decided on a suitable aspect ratio, set this in m.objects. To do this, right-click on the canvas again and select Canvas settings in the context menu. Under Real-time renderer you will find the setting options for the aspect ratio. In addition to predefined values, you also have the option of entering your own specifications manually.
If images in your show deviate from the selected aspect ratio, they are displayed on the canvas in such a way that, depending on the format, black bars appear on the right and left or at the top and bottom. You can use the Adjust aspect ratio wizard to make the necessary corrections. You can read more about this in the chapter Wizard: Adjust aspect ratio.
The canvas is one of the central components of m.objects. This is where you can see the editing steps and changes as you create your show. You can open the canvas using the corresponding icon in the toolbar or in the menu under View / Canvas. You can display the canvas in full screen, as a separate, smaller window or docked in the workspace. The latter setting offers the advantage that the canvas does not cover other parts of the workspace when editing, which is particularly practical when working with just one monitor.
When the canvas is docked, you can switch to window mode by double-clicking on the double bar in the frame of the canvas. The canvas can then be moved and positioned as required. Double-click on the bar again to dock it back into the interface.
You have already seen that a context menu appears when you right-click on the canvas. Here you will find important functions for operation.
You can control the playback of the show with Stop, Pause andPlay.
In the canvas settings, you first define the aspect ratio of the canvas, as described in the previous section. By clicking on the Optimize for full screen button, m.objects then calculates the output resolution to match the output device that is used to display the canvas in full screen. For example, if you have selected an aspect ratio of 16:9, a resolution of 1920 x 1080 pixels will result for a FullHD monitor, while an aspect ratio of 3:2 will result in a resolution of 1620 x 1080 pixels.
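The relationship between aspect ratio and resulting resolution can be checked with a short calculation (a conceptual sketch, not part of m.objects itself; the function name is illustrative): for a display 1080 pixels high, the width is simply the height multiplied by the aspect ratio.

```python
from fractions import Fraction

def fullscreen_resolution(aspect_w: int, aspect_h: int, display_height: int = 1080) -> tuple[int, int]:
    """Width and height that fill a display of the given height at the chosen aspect ratio."""
    width = round(display_height * Fraction(aspect_w, aspect_h))
    return width, display_height

print(fullscreen_resolution(16, 9))  # (1920, 1080), as for a FullHD monitor
print(fullscreen_resolution(3, 2))   # (1620, 1080)
```

This reproduces the two examples from the text: 16:9 yields 1920 x 1080 pixels and 3:2 yields 1620 x 1080 pixels on a Full HD device.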
You should only activate the Use device's target color space option if you have calibrated the output device used and a corresponding profile is stored on your computer. This option is not selected by default so that sRGB is used as the color space for displaying the images - a setting that can be used universally and ensures a high display quality.
The following option Soft saturation with additive mix should not normally be selected. This is only useful if you want to work artistically with additive image mixing in special cases and want to avoid overexposure in areas of the image that contain bright parts in several of the media displayed at the same time.
The smoothing of image field edges affects images that are displayed rotated. If you deactivate smoothing here, the edges of the image appear pixelated and clear steps can be seen. Activated smoothing, on the other hand, ensures cleanly displayed picture edges.
The indicators that you can set below this are described in detail in the Speaker Support chapter. The operation and functions of the Cutout and Split, Stereoscopy and Post Processing tabs can be found in the chapters Multiscreen and Softedge, Stereoscopy with m.objects and Global Color Grading (Post Processing).
In the context menu of the canvas, you will find the option Show graphic information under the canvas settings. This allows m.objects to display information on the resolution of the canvas, both for full-screen display and for the current reduced display. You will also find the current frame rate here, which should correspond as closely as possible to the frame rate required by the screen, projector or TV set - usually 60 or 50 fps. Significantly fluctuating values indicate that the system and especially the graphics card are reaching their limits.
Below this are the options for guides. Here you have the option of inserting horizontal and vertical guides to make it easier to align images and arrange them on the canvas. You can simply move these guides with the mouse and position them appropriately. The option Show guides and image fields displays or hides the guides on the canvas. The option Magnetic guides, which can be found in the Guides submenu, helps to align objects on the canvas. If you no longer need a guide, simply move it out of the canvas.
In this context, it is particularly helpful to use the Guideline Wizard, which allows you to arrange image elements in the canvas easily and precisely. You can find more information on this in the chapter The guide assistant.
The Select full-screen output device item is important when using two or more output devices, e.g. monitor and digital projector.
Another menu will now open where you can select the device for displaying the screen in full screen mode. If you do not make a separate selection, the upper option 0: like screen window applies. The screen is then displayed in full screen where it was previously visible as a reduced window. 1 is usually the primary screen with the desktop (this does not necessarily have to be the case, if in doubt, please try it out), 2 is the extended screen. Next to it, you can see which graphics card is used to control the respective device. In the above constellation, the screen would usually be option 1 and the digital projector option 2, which you then also select for the full screen.
Use the Full screen mode option to switch between full-screen and reduced display.
In particular for applications with only one screen, the options Show screen as window in stop mode and Hide screen in stop mode follow. If you select one of these, the canvas will be displayed as a window or docked or hidden as long as the show is in stop mode. This allows you to work on the show. As soon as you switch to Pause or Play, the screen switches to full screen mode. With Stop, it returns to window mode or docked status.
The tool window is one of the central components of the m.objects user interface when it comes to editing your Multivision. If it is not currently open, you can find it in the menu under View / Tool window.
The tool window is context-sensitive. This means that different objects are displayed here depending on the selected component. For example, if the time ruler is clicked, you will see tools such as single marker, wait time or index/skip marker. If, on the other hand, the image tracks are clicked, you will find dynamic objects such as zoom, image field or 3D animation in the tool window. This also applies to the audio tracks and, if used, to commentary tracks and other components such as lighting control. In addition to the tools, you will also find macros, media files, fades and sound effects here, depending on the active component.
To apply an object from the tool window, hold down the left mouse button and drag it onto the object to be edited in the tracks. For example, you can drag a zoom object from the tool window onto a light curve and drop it there.
The tool window can be displayed either as a tree structure (see image on the left) or as a list (see image on the right). To choose, right-click in the window and select Window layout in the context menu.
In the case of the list view, select the option for a multi-column view and the column width here if required. There is also the option here for both display formats to place selected favorites in front of the other tools. Then confirm your selection with OK.
Define the tools that you work with particularly frequently as favorites so that you always have them to hand. To do this, right-click on the desired tool and then click on Favorite. This is then highlighted in dark and placed in front of the other tools if you have previously selected this option. You can also remove the marking as a favorite in the same way. If you define generally available tools such as Zoom or Wait marker as a favorite, this automatically applies to all shows, while the favorite status of individual tools such as macros or audio media is saved in relation to the current show.
The tree structure displays the m.objects tools sorted by function in subcategories that can be expanded and collapsed for a compact view. The selected favorites are superordinate to these and remain visible even when the subcategories are collapsed. In this display format, it is advisable to place the tool window to the right or left of the tracks.
The display of the tool window as a multi-column list is particularly suitable for positioning above the tracks.
m.objects automatically creates space when inserting new media and - if necessary - automatically moves the following content to neighboring tracks. This simplifies your work on a production enormously, as you do not have to manually create the required space on the timeline or rearrange the following objects. It does not matter whether you drag the new content from the lightbox or the Explorer or Finder into the timeline, paste it from the clipboard (copy+paste) or use macros you have created yourself.
You can also deactivate or modify this function in the program settings under Settings -> Program settings, on the Timeline editor and Pool tab. For example, you can specify there that intelligent insertion is only effective on the image tracks and not on the audio tracks.
You can also deactivate this function on a case-by-case basis by pressing the Shift key when inserting.
The handling of intelligent insertion is very simple: You simply drag the object or objects to be inserted onto the transition between two existing light or sound curves. m.objects now displays the intelligent insertion with a rectangular frame.
As soon as this frame appears, release the mouse button and m.objects will insert the new media files. The fade-in and fade-out times are automatically adjusted to the existing sequence, and if the track assignment of the following media needs to be changed, m.objects will rearrange as many as necessary.
Depending on the position of the Selection in all components button on the far right of the toolbar, m.objects performs the necessary shifts in all components (e.g. sound, commentary, time ruler, etc.).
Inserting before the first image or appending after the last image of an existing sequence works in the same way: drop the new media onto the first fade-in or last fade-out. Adding individual new images in this way also makes it unnecessary to synchronize the fade-in and fade-out manually.
m.objects automatically closes any gaps that occur when media is deleted and, if necessary, automatically moves the following content to neighboring tracks. As with the intelligent insertion of media content, you also have the option here of modifying or switching off this function under Settings -> Program settings -> Timeline editor and pool, or of deactivating it on a case-by-case basis by pressing the Shift key.
Similar to the function for inserting new content, m.objects has a smart solution for deleting one or several consecutive media from existing sequences. Whether a gap is automatically closed after deletion and subsequent objects are moved up depends on the context: deleting a title that lies above an image sequence or a video, for example, does not lead to an unwanted shortening, nor does removing an image curve that has no direct connection to other curves on its left or right.
If necessary, m.objects automatically rearranges the subsequent objects on the tracks and adjusts the fade-in and fade-out times.
You can import images, videos and sound files directly from the macOS Finder or Windows Explorer into the m.objects timeline. The Finder or Explorer can therefore be opened directly from the m.objects user interface via the context menus of the image and sound tracks. To do this, right-click in an image or sound track and select the option Select images in Explorer / Finder or Select sound file in Explorer / Finder. m.objects then opens the subdirectory of the current project intended for this media type in the Finder or Explorer, or the directory from which media of this type were last transferred to the presentation during this session using drag & drop. You can now import the desired files into your presentation.
m.objects alternately distributes media newly imported into the timeline to two tracks. This means, for example, that when several new images are inserted, they are distributed across two adjacent image tracks, even if there are other image tracks available.
In insert mode, i.e. with intelligent inserting (see above), these are always the two tracks last used for a picture change, otherwise two neighboring tracks.
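The alternating distribution described above can be pictured as a simple round-robin over two tracks. The following is an illustrative sketch only; the function name and track numbering are hypothetical, not m.objects internals:

```python
def distribute_alternately(media: list[str], track_a: int, track_b: int) -> list[tuple[str, int]]:
    """Assign each newly inserted item alternately to one of two adjacent tracks."""
    tracks = (track_a, track_b)
    return [(item, tracks[i % 2]) for i, item in enumerate(media)]

print(distribute_alternately(["a.jpg", "b.jpg", "c.jpg", "d.jpg"], 1, 2))
# [('a.jpg', 1), ('b.jpg', 2), ('c.jpg', 1), ('d.jpg', 2)]
```

The alternation is what allows consecutive images on the timeline to overlap during cross-fades, since each image sits on a different track than its neighbor.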
You can switch this behavior off entirely in the program settings (Settings -> Program settings -> Timeline editor and pool) or override it in individual cases by pressing the Alt key.
By default, m.objects will return images, titles or videos that you delete from the image tracks to the Lightbox. Audio clips that you delete from the audio tracks remain in the audio pool, i.e. they still appear in the tool window when the audio tracks are active.
This procedure can be changed in the program settings. To do this, select Settings → Program settings in the menu and then the Timeline editor and Pool tab. You will now find the pool settings in the lower part of the window. Here you can differentiate between images, video clips and titles to determine whether or not they should be put back into the lightbox. If this is not desired, remove the tick in front of the relevant entry. The same applies to audio clips that should no longer appear in the audio pool after deletion. Then confirm your selection with OK.
Especially with large productions, i.e. when importing numerous images, video clips or sound files, it can easily happen that duplicates creep in unintentionally. This is where m.objects can provide an effective remedy.
Regardless of whether you import media into the lightbox or timeline via the internal file selection or via drag & drop, e.g. directly from the Explorer or Finder, m.objects will display a corresponding prompt if duplicates are detected.
You therefore decide whether m.objects should suppress the duplicates or also import them. For example, you can drag and drop an entire directory onto the lightbox, which you assume contains individual images not yet used in the presentation or on the lightbox. By filtering the duplicates, only the unused ones are effectively imported.
Another option is to remove duplicates from the lightbox. To do this, right-click in the lightbox and select the Remove duplicates option. m.objects will then remove all excess occurrences of media that are on the lightbox more than once. If there are media on the lightbox that are also on the image tracks, you can also decide here whether they should be removed from the lightbox.
m.objects recognizes media already used several times on the image tracks using the Show / mark duplicates command from the context menu (right-click in the image tracks). If duplicates are found, the program displays all multiple occurrences in a list.
By simply clicking on an entry, the locator jumps to the corresponding light curve. Click on the Mark selected duplicate button or alternatively double-click on the entry in the list to mark the media file in the timeline: The light curve of the image or video is marked with a hatch and the label Duplicate.
This makes it easy to find the duplicate and you can delete it later or replace it with another media file.
Duplicates are initially media whose complete file path is identical. If media only have an identical file name but are located in different directories, they are classified as duplicates if the content is identical. The system also checks for similar file names and content identity. Similar in this sense are, for example, the file names _DSC3498.jpg and copy of _DSC3498.jpg or also _DSC3498 (2).jpg, simply put, if one of the compared names is part of the other. This also recognizes duplicates whose file names have been automatically changed by duplication.
However, the search for duplicates on the image tracks does not classify identical media as duplicates if they were created overlapping on the timeline, which was usually done deliberately, for example to fill the background with a different aspect ratio. Similarly, masks are never classified as duplicates, as identical masks are often used several times within a presentation. Video clips on the timeline that show different sections of the same file are of course not considered duplicates either.
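The naming and content rules described above can be approximated in a short sketch (a simplification for illustration only; this is not m.objects' actual detection code):

```python
import os

def similar_names(path_a, path_b):
    """Two files count as name-similar if one base name is contained
    in the other, e.g. '_DSC3498.jpg' and 'copy of _DSC3498.jpg'."""
    a = os.path.splitext(os.path.basename(path_a))[0].lower()
    b = os.path.splitext(os.path.basename(path_b))[0].lower()
    return a in b or b in a

def is_duplicate(path_a, path_b, same_content):
    """Identical full paths are always duplicates; otherwise the
    names must be similar AND the content must be identical."""
    if path_a == path_b:
        return True
    return similar_names(path_a, path_b) and same_content
```

The containment check is what catches file names that were changed automatically by duplication, such as `_DSC3498 (2).jpg`.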
Since program version X-2024, m.objects has taken the sensor orientation at the time of recording into account when importing photos and videos, so that manual rotation of the content is no longer necessary.
Photos and videos from smartphones, as well as portrait format images from more modern cameras, often appeared rotated in older m.objects versions, because these devices always save the image content in the same orientation and merely add meta information about the sensor position to the file. The default setting for the orientation of newly imported photos and videos is now automatic detection and consideration of the sensor position. However, since older shows in which the necessary rotation may already have been performed manually must remain unchanged, this rotation is not applied to material already integrated into the lightbox or timeline.
The setting for evaluating the sensor position can be found in the properties form of the respective image or video under the Rotation option.
You can see the sensor position recognized by m.objects during import in the info box at the top right.
If there is a question mark, this means that the image or video was imported with an earlier version of m.objects. To recognize and take into account the correct sensor position, you can click on the Create new preview images button in the form. To update several or all preview data of a show, you can alternatively select the desired media and call up the option Recalculate display/texture in the context menu (right-click on the bar under one of the light curves). The recalculation then takes place in the background while you continue working.
All images and videos that were imported with an older m.objects version have the setting Do not rotate under Rotation, so that the display remains unchanged in newer versions. As of version X-2024, m.objects sets newly imported media to rotate automatically according to the sensor position. You can change this setting later (even for several images at the same time), for example to have m.objects handle manually corrected images and videos in the same way and to remove the adjustments previously made manually with rotation, zoom or image field objects. The advantage of this procedure lies in the more consistent handling of image field and rotation objects, as these then behave naturally and are not themselves rotated.
One of the first steps in the production of an AV show is to insert images into the picture tracks.
All images are initially inserted into the image tracks with predefined stand and fade times. You can of course change these as required when editing the presentation. The values for this standard default are defined in the Standard tool, which you can find in the tool window. If it is not visible there, simply click anywhere in an image track.
Double-click on the object to open its editing window, where you can change the time settings individually.
Below this, you will find the options Use overlapping image blending for images and Use overlapping image blending for video clips. m.objects uses additive image blending for images and overlapping image blending for videos by default. As a rule, it is recommended that you work with these defaults. However, you can change them here if required. It is only advisable to insert images in overlapping mode in exceptional cases, as this can make it more difficult to handle transitions, especially if the assignment of tracks is changed. Inserting video clips in additive (instead of overlapping) mode can simplify working with transitions at the editing positions. You can find out more about image mixing in the corresponding chapter.
You will also find the Adjust aspect ratio (automatic image field) option here. With this option, you can ensure that images that deviate from the aspect ratio of the canvas are still integrated to fill the format - for example, if you insert an image in 3:2 format into a 16:9 canvas. To do this, m.objects inserts a zoom object if required, which automatically scales the respective image to the exact size required to fill the canvas completely. If you change the aspect ratio of the canvas later, the zoom factor is automatically adjusted accordingly. You can specify whether this adjustment should always take place or only after confirmation.
You can read more about this in the Zoom object chapter.
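The scaling involved can be illustrated with a small calculation (an illustrative sketch of the geometry; m.objects performs this internally):

```python
def fill_zoom_factor(image_w, image_h, canvas_w, canvas_h):
    """Zoom factor needed so that an image fitted into the canvas
    fills the canvas completely; the excess on one axis is cropped.
    Illustrative sketch only."""
    image_ratio = image_w / image_h
    canvas_ratio = canvas_w / canvas_h
    # Whichever aspect ratio is 'wider' decides which axis must grow.
    return max(image_ratio / canvas_ratio, canvas_ratio / image_ratio)
```

A 3:2 image on a 16:9 canvas, for example, needs a zoom factor of (16/9)/(3/2), i.e. roughly 18.5 % enlargement.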
The speaker preview option (proxy image display) ensures that the respective image is displayed in the speaker preview. You can find more information on this topic in the chapter Speaker preview for live presentations.
Individual images that are already on the image tracks can simply be swapped for other images using drag and drop. The light curve information such as fade-in and fade-out, stand time and any existing animations are retained in full.
The thumbnails in the light curves have a handle at the top left. If you click on this with the left mouse button, the mouse pointer turns into a curved double arrow, indicating that the images can be swapped.
If you hold down the mouse button and drag the thumbnail to another light curve and release the mouse button, the two images swap places. In this way, the order of the images in the presentation can be changed directly on the image tracks.
Similarly, you can also drag an image from the lightbox onto an image in a track so that both are swapped. The image from the lightbox now sits in the light curve, and the other image is placed on the lightbox instead. You can also drag and drop another image from Windows Explorer or from a selection in Photoshop onto an existing light curve in this way.
If you want to swap several images at the same time or assign other images to the contents of all light curves, use the extensive options of the magazine editor, as described in the following chapter.
The RAW format is very popular with professionals and ambitious amateurs, as it does not involve any loss and saves the largest possible amount of image information. Image corrections and alterations can therefore be made easily and with potentially the best achievable quality. On the other hand, RAW is not a standardized format: each camera manufacturer has its own RAW specifications, and different camera models from the same manufacturer also deliver different RAW files, depending on the image sensor used. It is therefore not possible to integrate all RAW specifications into m.objects and keep them up to date at a reasonable cost. For this reason, images in RAW format cannot be integrated directly into m.objects, but must first be saved in a different format using suitable software (camera software, Adobe Lightroom, etc.). The most suitable formats are jpg and tiff. The jpg format compresses the image data effectively with relatively low losses (depending on the set quality) and produces relatively small file sizes, while tiff works losslessly but produces quite large files.
A simple method of inserting images into an m.objects presentation is to use Windows Explorer. Open the Explorer (e.g. with the key combination Windows key + E) and then select the desired images directly in the file directory. To do this, select the images with the mouse and then drag them to the image tracks in m.objects by holding down the left mouse button. As soon as you have reached the position where you want to save the image material, release the mouse button.
The stand times of the individual images then correspond to the standard default and can of course be changed individually. You can find out how to do this in the rest of this chapter.
Using Windows Explorer, you can of course also integrate video material into m.objects in the same way as still images.
If you would like to insert all images within a folder into your show, you can also simply drag the folder symbol onto the tracks. Any existing subfolders will also be taken into account.
Please note: If you insert images directly from a folder in the file directory into your show, they are still outside the m.objects project directory. A corresponding warning will also be displayed on the screen when you insert them. As already described, however, it is advisable to copy the images to the project directory so that they are still available when the show is exported later or if the original files are moved. To do this, simply open the file manager in m.objects (File / Manage media files) and select the option Copy external files to the current project directory in the lower section.
The red dot on the right below the picture and sound tracks allows you to quickly insert files into your show.
Click on the red dot below the image tracks and select the Insert images option. A selection window opens showing you all the image files in the Pic folder of the project directory. You can also use the drop-down menu at the top to select any other folder in your computer's file directory and choose the desired image material from there. In the preview window, the selected image is displayed in reduced form.
Select the images you want to insert and then click on Open. If you now drag the mouse pointer over the image tracks, you will notice that the images 'hang' from the mouse pointer. Find the appropriate place in the show where you want to insert the images and left-click. Only now will the images be inserted.
If you want to insert images within an existing image sequence, drag the mouse pointer onto an existing crossfade when inserting. m.objects then inserts the images intelligently by automatically creating the required space itself. The following content is moved back accordingly and, if necessary, redistributed on the tracks. A detailed description of this function can be found in the Intelligent insertion of media chapter.
The m.objects lightbox offers you the most convenient way to insert images and videos into a presentation because, like the classic model from slide photography, it allows you to view images, compare them with each other and pre-sort them in the appropriate order. This can save a lot of time and effort when editing the show later on.
In the toolbar, you will find the corresponding icon directly next to the icon for the screen. Clicking on it (or in the View / Lightbox menu) opens the lightbox and initially presents several rows of empty image compartments. Like any other window in m.objects, you can place the lightbox anywhere on the screen and zoom in and out as required. The display size can also be varied: Right-click in the lightbox, under the menu item Display you will find the settings small, medium, large and extra-large.
First double-click on an empty picture box on the lightbox. The file selection menu appears. Select the desired images by highlighting them and then confirm with Open. The selected images will now appear in the lightbox. You may also see a warning that some of the images are outside the project directory. What you should do in this case is already described in the File management chapter.
By holding down the left mouse button, you can move the images on the lightbox to other positions as required and thus define a rough sequence in the first step. Of course, you can also move several images at the same time by first selecting these images while holding down the Shift key. If you want to swap one image with another, simply move this image onto the other and release the mouse button. Both images have now swapped places.
If you want to move an image between two other images, hold down the left mouse button and drag the image in question onto the bar between the two images until an arrow symbol for inserting appears. Now release the mouse button and the picture has taken its new place.
You can also replace an image in the lightbox with a new one: To do this, right-click on the image in question and select Load image file(s). Select the new image and confirm again with Open. The original image has been replaced by the new one.
You can also sort images and videos in the lightbox according to certain criteria. To do this, you will find the Sort... option in the context menu of the lightbox. Clicking on this opens a window with a selection menu that offers you a whole range of different sorting criteria, from the recording date and exposure time to the frame rate for videos.
Select the desired option here and confirm with OK. For example, you can use the Aspect ratio option to easily sort the images in the lightbox according to portrait and landscape formats.
You can quickly and easily enlarge individual images in the lightbox by moving the mouse pointer over them and pressing the Shift key.
If you have loaded a video into the lightbox, the first frame from the video is displayed in this way.
One particularly helpful function of the lightbox is the test crossfade. To use it, open the canvas and position it so that you can see both the lightbox and the canvas; you may need to reduce the size of both windows slightly. It is convenient here to work with a second screen on which you can show the canvas in full-screen mode. If you now drag an image in the lightbox onto another image with the right mouse button and hold the button down, you will see the crossfade between these two images on the canvas. In this way, you can easily assess how well the images suit a crossfade, or whether a different combination would be better, without having to integrate the images into the show. To simulate different fade times, there are several options to choose from under Test fade time in the context menu of the lightbox (right-click).
Once you have finished sorting, the next step is to import the images from the lightbox into the m.objects show. To do this, first make a selection by marking the relevant images. You can select a single image by clicking on it. If you want to select more images, hold down the Shift key and click on another image to select all the images in between, row by row. Alternatively, hold down the Ctrl key to select further individual images with the mouse. After you have selected the last image, hold down the mouse button and drag your selection to a free space in the image tracks or to an existing crossfade between two images and release the mouse button. m.objects will now insert the images at the desired position.
If you have not dragged all the images into the image tracks, the rest will remain on the lightbox. You can easily close the resulting gaps by right-clicking in the lightbox again and selecting the Clear lightbox option.
Alternatively, right-click in the lightbox and select the Automatically arrange images option. The images are then moved to a new row according to the displayed width of the lightbox if required. The images are also automatically moved together so that all gaps in the lightbox are closed.
Images that you delete from the image tracks are placed back on the lightbox and inserted there in the sequence of images at the bottom. If you delete several images from the tracks at the same time, they will retain the previously assigned order in the lightbox. In addition, deleted images are automatically assigned to the keyword deleted from the timeline. You can read more about keyword management in the following chapter.
With m.objects keyword management, you can significantly expand the functionality of the lightbox and turn it into a storyboard for your production. Keywords help you to maintain an overview even in large productions with extensive image and video material.
You can create your own keywords in m.objects and organize them hierarchically. If you work with Adobe Lightroom to edit your images and also assign keywords there, these can be easily imported into m.objects and supplemented with additional keywords if required.
To be able to work with keywords, the Use keywords option must first be activated in the context menu of the lightbox.
Now you can start assigning your own keywords. The sorting function of the lightbox (see above) can also be useful here. For example, sort the images by aspect ratio so that the portrait formats are displayed first, followed by the landscape formats. Now select all portrait format images and right-click in the lightbox. Select the Assign keywords option here. The corresponding window opens.
If no keywords have been assigned yet, it shows an empty window, otherwise the existing keywords are shown here in their hierarchical order.
Click on the New button and enter the name for the keyword in the following window, in this case portrait format, and confirm with OK.
The new keyword now appears in the window. Click OK again.
In the context menu of the lightbox, you will also find the Filter by keywords option. This is used to select the display of the images or videos in the lightbox using the existing keywords so that only the images / videos that are assigned to the selected keywords are displayed. You will see two numbers in brackets behind the keywords. The first shows how many images / videos are assigned to the keyword, the second how many of them are used on the image tracks.
In this window, click Deselect all so that no keyword is selected. If it is not already activated, check the option Show elements without keywords. If you now confirm with OK, the portrait format images in our example will disappear from the lightbox and only the landscape formats will be displayed. Select these and assign a new keyword named Landscape format using the Assign keywords option as described. Then call up the Assign keywords option again and create the new keyword Aspect ratio.
The three newly created keywords Portrait format, Landscape format and Aspect ratio are now arranged one below the other in the list. To create a hierarchical structure, use the mouse to drag the Portrait format keyword onto the Aspect ratio entry. Portrait format is now located below Aspect ratio and is therefore subordinate to this keyword. Proceed in the same way with Landscape format.
In this way, you can now create additional keywords for specific topics or motifs, for example, assign them to the corresponding images and structure them hierarchically if required. In the Filter by keywords window, you can then select specific images according to these keywords when producing your show. To do this, place a tick in front of the desired keyword.
Here you will also find the options one available, all available and exactly as selection. If you work with a large number of keywords, you can use these to modify the selection of images accordingly: an image must then either be assigned to at least one of the selected keywords, be assigned to all of them, or correspond exactly to the specified selection.
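The three modes can be pictured as simple set operations (an illustrative sketch with hypothetical names, not m.objects' internal logic):

```python
def keyword_filter(media_keywords, selected, mode):
    """Decide whether a media item passes the keyword filter.
    'one available': at least one selected keyword is assigned.
    'all available': every selected keyword is assigned.
    'exactly':       assigned keywords match the selection exactly."""
    media = set(media_keywords)
    sel = set(selected)
    if mode == "one available":
        return bool(media & sel)
    if mode == "all available":
        return sel <= media
    if mode == "exactly":
        return media == sel
    raise ValueError(f"unknown mode: {mode}")
```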
You can also rename or remove keywords here or delete all unused keywords from the list. The Show deleted from timeline option, if activated, ensures that all images that you have removed from the image tracks during production appear in the lightbox alongside the selection made.
If you want to use the lightbox without keywords again, simply deselect the Use keywords option in the context menu.
m.objects saves the keywords within a show. If you copy images from one show to another, m.objects copies the keywords there.
Adobe Lightroom is one of the most frequently used RAW converters and programs for managing and post-processing images. Here, too, there is the option of assigning keywords.
When importing images into the Lightbox, m.objects offers to import keywords from Lightroom. After calculating the textures, m.objects automatically opens the Import keywords window, in which all existing keywords appear in a list.
Here you can now choose to import all of these keywords, none of them or a selection of them into m.objects. Initially, all keywords are selected; if necessary, remove the check marks from the keywords that you do not want to import. Click on Deselect all to remove all check marks.
The selected keywords are then added to the keyword list of your project; a hierarchical structure created in Lightroom is also adopted by m.objects. The keywords can then be further processed as described in the previous chapter.
This import option is particularly practical if you already manage a large number of images with Lightroom and have worked intensively with keywords. This ensures a seamless workflow from shooting via Lightroom to production in m.objects.
You can also deactivate the use of keywords in m.objects. This means that no corresponding feedback is displayed when importing images that already contain keywords.
To do this, select Settings → Program settings in the menu and then the Timeline editor and Pool tab. At the very bottom of the window, you will find the option Never use keywords for objects in the lightbox. If you tick the box here and confirm with OK, the use of keywords in m.objects is switched off globally. To reactivate them, remove the checkmark accordingly.
If you work with many image tracks, the preview magnification in m.objects offers a convenient way of providing a better overview.
To do this, simply drag the mouse pointer over the light curves of the images while holding down the Shift key. The corresponding preview image is enlarged at the respective point so that you can quickly find individual images even in more complex productions.
Important Exif data for the image is also displayed under the enlarged preview, provided it is saved in the image: Date, time, camera model, lens, exposure time and aperture.
The magazine editor is only useful for working on digital presentations in a few cases; its functions are primarily designed for working on presentations for classic slide projection.
To open the magazine editor, select View / Magazine editor in the program menu.
In contrast to the lightbox, it displays the exact content of the image tracks. This means that you will find as many rows in the magazine editor as your presentation has image tracks. All images on the tracks are displayed in the exact sequence.
The magazine editor is of particular interest if you are switching from analog to digital presentation. As a rule, you have scanned the images for your slide shows in relatively low resolution and integrated them into m.objects. The first step is therefore to digitize the slides. As soon as these are available as high-resolution digital images, proceed as follows: After double-clicking on an image in the magazine editor, the corresponding counterpart with the high resolution can be assigned via the Search button. The stand times, length of the fade-in and fade-out and sound remain from the analog show and are simply adopted. So instead of recreating the entire presentation, you simply replace the images.
The image mix has a decisive influence on how several completely or partially overlapping images are displayed in the m.objects canvas. This applies, for example, to crossfades where one image fades out and the next image fades in at the same time. This also applies to picture-in-picture displays, where a picture is displayed in front of a background picture.
m.objects always builds the display in the canvas starting with the lowest image track, similar to the processing of layers in Adobe Photoshop or many other image processing programs.
m.objects generally uses the additive option as the default mode. This is very suitable for smooth transitions from one image to the next. Additive image blending is also the right setting for special effects such as the artistic blending of image content.
In additive mode, the current height of the light curve defines how much of the brightness of the corresponding object is added to the tracks below it. This also means that overexposure can be created deliberately.
In overlapping image blending, on the other hand, one image covers an underlying image. This type of image blending is used, for example, for picture-in-picture montages or for texts that you create with the title editor.
In this mode, the light curve defines the opacity of the object over the underlying tracks.
The light curve of an image is also marked accordingly: A dark yellow light curve clearly indicates that additive mode is selected. If, on the other hand, the light curve is green, the overlapping mode is set here.
The two previous screenshots each show a crossfade process. In the first screenshot, both images are set in additive mode, i.e. one image is faded out while the second is faded in. In the second screenshot, the top image is set to overlapping mode. As the brightness values are not added here, the lower image must already be fully faded in when the fade-out of the image above begins. The result on the screen is identical in both cases.
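For a single brightness value between 0 and 1, the two modes can be written as simple formulas (a simplified per-channel model for illustration; the renderer's actual pipeline is more involved):

```python
def additive_blend(lower, upper, curve):
    """Additive mode: the light curve height (0..1) determines how much
    of the upper image's brightness is added to the tracks below.
    Values above 1 clip, which is how deliberate overexposure arises."""
    return min(lower + curve * upper, 1.0)

def overlapping_blend(lower, upper, curve):
    """Overlapping mode: the light curve acts as opacity, so the upper
    image covers the lower one instead of adding to it."""
    return curve * upper + (1.0 - curve) * lower
```

With the curve fully raised (curve = 1.0), additive mixing yields min(lower + upper, 1.0), while overlapping mixing shows only the upper image.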
Below the light curve of each image you will see a bar with the image designation as well as information on stand and fade times.
Double-click on the light curve of the image to open the Edit image window.
Here you will find a wealth of information and setting options for the respective image.
At the top of the window you will see the name of the image, which m.objects takes from the name of the image file. You can also enter your own name here, which will then be displayed under the light curve in the image track.
Below this you will find the image file with the file name and file path. You can use the Search button to link another image with the corresponding light curve if required.
The Title generator option is automatically selected if the image in question is a text that you have created using the m.objects title editor. The Open editor button then takes you to the title editor, where you can edit the text. If the image is not a text, this button is inactive. You can read more about creating texts in the chapter Creating texts with the title editor.
At the top right of the window you will also find information on the resolution, aspect ratio, memory size and color profile of the image.
The Isolate image layer option allows you to isolate individual layers from Photoshop files, i.e. files with the extension .psd, so that all other layers are hidden.
If a psd file has several layers, enter the value for the desired layer under No. The bottom layer of the image in Photoshop has the value 0, the layer above has the value 1 and so on. The option Keep layer position from overall image is preselected. This means that, for example, a cropped shape is displayed in the same size and position as in the original image. If you uncheck this option, however, m.objects will display the cropped shape to fill the format. If you isolate an image layer, you should also select the Overlapping option under Image blending and the Alpha channel value under Transparency. This allows m.objects to process any transparency information and display the image accordingly.
Below this, you will find the option Suppress zoom/tilt filter of the renderer.
The zoom/tilt filter is used to avoid flickering effects during 3D animations or extreme zooming by reducing the sharpness of the image. If the image sharpness is to be maintained in such cases - especially with static applications - or if flickering is to be accepted, the filter can be switched off with this option.
The following area in the editing window relates to image mixing. Further information on this can also be found in the chapter Image mixing on page 86.
The overlapping image blending mode offers the option of working with partial transparency (clipping).
The following screenshot shows an image in an image track in which the sky is to be cropped.
To do this, the overlapping mode must first be set. In this case, select the Define color tone option in the selection menu under Transparency.
To select the sky, first click in the box next to Pipette. Then move the mouse pointer onto the canvas, where it takes on the shape of an eyedropper, and then click in an area with a medium shade of blue.
The sky is then no longer displayed, but instead a checkerboard pattern is shown as a symbol for transparency.
If the cropping was not completely successful or perhaps an area of the rest of the image has also become transparent, you can still adjust the result using the tolerance slider.
This technique is also used in connection with chroma keying, which is primarily used to cut out video content in front of a monochrome background (bluebox or green screen process). You can read more about this in the chapter on real-time chroma keying.
If you select the Define color option instead of Define color tone, only the area that corresponds exactly to the selected color will be cropped. You can use the white and black options to crop white and black areas of the image. Here too, the tolerance slider may help to optimize the result of the clipping. The Brightness option crops according to brightness values in the image and not according to a color or hue. Finally, Alpha channel offers you the option of using transparency information that is already contained in an image. The image has then already been cropped in another program such as Photoshop.
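Conceptually, hue-based clipping with a tolerance can be sketched as follows (a rough illustration using Python's colorsys module; the actual renderer works differently):

```python
import colorsys

def is_hue_transparent(pixel_rgb, key_rgb, tolerance):
    """Return True if a pixel should become transparent because its
    hue lies within `tolerance` of the picked key color's hue.
    RGB components and tolerance are in 0..1; hue is circular."""
    pixel_hue = colorsys.rgb_to_hsv(*pixel_rgb)[0]
    key_hue = colorsys.rgb_to_hsv(*key_rgb)[0]
    # Measure the hue distance the short way around the color circle.
    diff = abs(pixel_hue - key_hue)
    return min(diff, 1.0 - diff) <= tolerance
```

A larger tolerance corresponds to moving the tolerance slider further, so more shades of the picked hue become transparent.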
The Reverse transparency option swaps the cropped and uncropped areas in the image. For example, you can create a text with the title editor, add a suitable color and then use Reverse transparency to ensure that an underlying image is only displayed in the contours of the text. Find out more about working with the title editor in the chapter Creating texts with the title editor.
Under Image blending you will also find the options Image mask and Effect mask. Detailed information on working with masks can be found in the Masks chapter.
The Image editing area offers you extensive options for correcting and post-processing images and for applying a range of effect filters.
You can find a detailed description of this in the Image processing chapter.
You can find more information on the option to treat as a mounted stereo image in the chapter Stereoscopy with m.objects.
ICC Color Management is activated by default in m.objects and should not normally be switched off. However, should this be necessary in exceptional cases, you have the option of suppressing or forcing color management here.
You can read more about ICC Color Management under Color Management and Calibration.
Finally, in the bottom line of the editing window, you will find the options Single image and Full image for Locator.
With Single image, only this image is displayed in the canvas while you edit it - for example if you want to crop a part of the image - without the other image tracks being shown. With the Full image option for Locator, on the other hand, the image appears in the canvas in its entire context, i.e. including all images in the other tracks that are visible at the current position of the locator.
With the QuickBlending options, you can vary image transitions in m.objects in a variety of ways and add a range of freely adjustable effects. The application is very simple and takes place directly in the existing image sequence on the image tracks. You can either change certain image transitions or apply the QuickBlending effects to individual images and specifically modify their fade-in and/or fade-out. All fade-ins and fade-outs edited in this way are immediately recognizable in the light curves by the dark yellow display.
To change a fade between two images, simply hold down the left mouse button and drag a frame around the handles of this fade so that they are highlighted. If you now right-click on one of the selected handles, you will find the Fade entry in the context menu.
Select this entry to access the options for the QuickBlending effects.
Alternatively, you can also find the QuickBlending effects in the tool window under Fades. In this case, drag the desired fade with the mouse onto the corresponding blending phase.
You will now see an animated preview of the type of crossfade you have just selected in the canvas, allowing you to immediately check the result of the effects described below and adjust the parameters accordingly.
Under the Fade type option, you will first find the Standard entry. This stands for the default fade, i.e. fading out one image and simultaneously fading in the next. If you click on this entry, a selection menu opens with a range of further options.
As soon as you have selected one of these, you can also edit the other options in this form. With Wipe, you can create wipe fades as image transitions that run either from one side across the entire image (one-sided wipe) or from both sides to the center of the image (two-sided wipe). In the form, the Reverse wipe option is also initially activated. If you check the Reverse fade-in checkbox instead, the movement will run in the opposite direction.
Use the Square or Circle/oval options to create apertures in the corresponding geometric shapes.
The Blur option can be used to soften the edges of the apertures. Use Center horizontal and Center vertical to move the starting point of the aperture animation to suit the subject. In the image example above, the center has been moved a little to the bottom left so that the aperture animation starts exactly above the beetle. A circle/oval is selected as the aperture type here.
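The interplay of aperture shape, blur and center can be illustrated with a small sketch. This is not m.objects code; it is a hypothetical Python function that computes, for a single pixel, how visible it is at a given moment of a growing circle/oval aperture with a softened edge. The coordinate convention (0..1 across the canvas) and the linear edge ramp are assumptions.

```python
import math

def circle_aperture_alpha(x, y, progress, cx=0.5, cy=0.5, blur=0.1):
    """Alpha of pixel (x, y) for a growing circular aperture.
    progress runs 0..1 over the fade-in; blur softens the circle edge.
    cx/cy correspond to the Center horizontal/vertical options."""
    # radius large enough to cover the whole frame at progress = 1
    radius = progress * math.hypot(max(cx, 1 - cx), max(cy, 1 - cy))
    d = math.hypot(x - cx, y - cy)
    if blur == 0:
        return 1.0 if d <= radius else 0.0
    # linear ramp of width `blur` centered on the circle edge
    return min(1.0, max(0.0, (radius + blur / 2 - d) / blur))

# early in the animation, only pixels near the center are revealed
print(circle_aperture_alpha(0.5, 0.5, 0.2))  # 1.0 -> center already visible
print(circle_aperture_alpha(0.9, 0.9, 0.2))  # 0.0 -> corner still hidden
```

Moving cx/cy shifts where the animation starts, exactly as described for the beetle example above.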
If you select the option Match shape/direction to aspect ratio of image, i.e. check the box here, the aperture shape will adapt to the aspect ratio of the image - in this case, it will be stretched.
You can also use the Alignment option to rotate the selected aperture as required.
You can further modify the effect under the Division option. This allows you to divide the aperture pattern into a freely selectable number of tiles or concentrically arranged shapes, whereby you also have the option of rotating either the entire grid created in this way or each tile individually.
In another example, the Tiles, concentric option is initially selected here.
If you select the option Tiles, concentric, rotate grid instead, the entire grid with the tiles will be rotated by the value selected under Alignment.
You can also add a gradient fade to the animation. To do this, check the corresponding option. The gradient fade is applied like a gradient over the entire crossfade so that it does not run evenly over the entire image, but instead runs from the defined center to the outside or vice versa from the outside to the center.
You determine the extent of the gradient by entering a value in the input field or using the arrow control.
If you select soft acceleration/deceleration of the animation, the fade-in or fade-out no longer runs at a constant speed, but starts slowly, accelerates, and slows down again toward the end.
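A common way to implement this kind of easing is the smoothstep function, which maps linear progress onto a curve that starts and ends slowly. Whether m.objects uses exactly this curve is not documented; the sketch below merely illustrates the principle.

```python
def smoothstep(t):
    """Soft acceleration/deceleration: maps linear progress t (0..1)
    to an eased value that changes slowly at the start and the end."""
    t = min(1.0, max(0.0, t))
    return t * t * (3.0 - 2.0 * t)

# compared with linear progress, the eased fade lags at the start,
# catches up in the middle and settles gently at the end
for t in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(round(smoothstep(t), 3))
```

At t = 0.25 the eased value is only about 0.156, i.e. the animation is still gathering speed, while the midpoint and the endpoints match the linear fade.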
If you select Blur as the fade type, a Gaussian blur is applied in addition to the normal fade-in and fade-out or crossfade, the intensity of which you can vary individually.
You can also apply QuickBlending to several crossfades at the same time. In this case, select all the desired blending phases and set the values as described. All selected transitions are then modified in the same way.
Alternatively, m.objects QuickBlending can also be applied to individual images. This is particularly useful for images in overlapping mode such as title overlays and other inserts. For example, you can insert a title with a wipe fade and fade it out again in the same way after the desired display time.
To do this, right-click on the bar below the light curve and select the Aperture option in the context menu or drag the desired aperture from the tool window directly onto the light curve.
The QuickBlending editing window opens. If you now select one of the options under Fade type, you can also specify directly below whether the selected effects are to be applied to the fade-in and fade-out, or only to the fade-in or only to the fade-out. By default, m.objects specifies fade-in and fade-out.
If you would like to show a title on the screen using a wipe, simply select the Wipe on one side option and confirm with OK.
During playback, you will notice that the font is faded out in the opposite direction to the reading direction. To change this, activate the Reverse fade-out option in the QuickBlending form. The title will then fade out in the same direction as it fades in.
Here, too, it is of course possible to select several images or titles at the same time and edit them in the same way using the QuickBlending effects.
Especially in travel presentations, it is a popular stylistic device to trace the course of the journey on a country or road map with a line.
To display the route dynamically as an animation, you need two images: Firstly, you need the image of the map as a background and secondly, the course of the route in the form of a cropped line. You can create such a line in any image processing program. Make sure you save the image in a file format that supports transparency information, such as png or tiff. If you work with Photoshop, you can also use the psd format.
In m.objects, arrange the two images so that they lie on top of each other in the image tracks. It is important that the image of the map is below the image with the travel route.
Now double-click on the light curve of the itinerary image to open the properties window. Here, select the Overlapping option under Image blending and Alpha channel for Transparency (these options are already preselected for png files). m.objects will now interpret the line as a cropped shape.
Then click on Set aperture to access the QuickBlending form. Under Fade type, select the Fill option.
In the canvas, you will see the cut-out line indicated in grey. You now need to define where the starting point of the travel route animation is, i.e. where the journey begins. To do this, simply click on the corresponding point on the line and a preview of the animation will appear in the canvas.
Alternatively, you can also enter the starting point numerically in the form under the options Horizontal starting point and Vertical starting point, or set it using the arrow controls. The decisive factor here is that the green point touches the cropped line. As long as this is not the case, there will be no animation.
You can use the blur value to influence the appearance of the front edge of the animated line. As a rule, a value slightly above 0 produces a pleasing result. For this reason, m.objects specifies the value 5.00% by default.
As with the other QuickBlending fade types, you also have the option here of applying the fade to the fade-in and fade-out, or only to the fade-in or fade-out. In the case of fade-in and fade-out, use the Reverse fade-out option to ensure that the travel route is removed again in the same direction in which it was traced at the beginning. With the Fade in only option, the route is first faded in as an animation and later faded out again with a standard fade.
If you select a point in the middle of the cropped line as the start for the animation, the animation will take place simultaneously in both directions to the end points of the line.
Once you have entered the desired values, confirm each form with OK. As with the other types of QuickBlending, you can influence the speed of the travel route animation with the length of the fade-in or fade-out of the image. For example, to slow down the animation during the fade-in, lengthen the fade-in phase of the image. If you shorten it, the animation will speed up accordingly.
Under the Fill option, you will find the Tidal wave entry under Aperture type, which you can also use for the animation of travel routes.
The name of this aperture type already indicates what happens here: instead of tracing the entire line of the journey in an animation, in this case only a section of the line is shown, which runs from the starting point to the end point.
You can use the blur in the QuickBlending options to influence the length of this section. The higher the value for the blur, the longer the section of the route displayed.
If you select a starting point in the middle of the line, two separate sections will now appear, each moving to the two end points of the line.
The Fill and Tidal wave fade types, like the other fades, can also be found in the tool window, from where they can simply be dragged onto the corresponding image. Of course, they can be used for more than just the animated display of travel routes. In principle, they work with all cropped shapes, so that - depending on the motif - they offer many creative options for designing overlays.
m.objects features an intelligent technique for content-based adjustment of images to the aspect ratio of the display, i.e. to the aspect ratio of the canvas or image field. This technology evaluates image content and automatically decides which parts of an image may be distorted and which may not. The method, also known as content-aware scaling, leaves the parts that are important to the image untouched, while less relevant parts - such as the horizon line or a blurred background - are stretched inconspicuously.
In the example image, you can see a photo that deviates from the aspect ratio of the canvas, clearly recognizable by the black bars on the right and left. One option would now be to enlarge the image using a zoom object or the Adjust aspect ratio wizard (you can read more about this in the chapter Wizard: Adjust aspect ratio) so that it completely fills the canvas, as shown in the following image.
In this case, however, this results in important parts of the motif being cut off at the top and bottom. Content Aware Scaling takes a different approach here. Double-click on the light curve of the image to open the Edit image window.
Here again, the first item in the Image editing menu is the option Intelligently adjust aspect ratio. Select it with a mouse click and you will see a slider on the right-hand side with the setting levels 0 to 4. At level 0, the image is not adjusted, while levels 1 to 4 represent different versions of the intelligent adjustment. Basically, levels 1 and 3 only determine vertical lines for spreading, while levels 2 and 4 also process diagonal lines within the image. The appropriate adjustment level therefore depends on the subject in question; if in doubt, compare the results on the screen. Of course, not every image is equally suited to intelligent spreading.
After clicking OK, you will now see the full-size image in the canvas. The decisive motif area has remained unchanged, while the necessary spreading has been carried out in the remaining image area.
This is a static effect for images; by design, it cannot be applied to video material. As this is a very computationally intensive process, it can take a few seconds for a change to the slider to take effect on the screen. However, once m.objects has calculated the image (i.e. the texture), no further computing power is required during playback.
Incidentally, this technique also works in the opposite direction, so you can also use it to "compress" a suitable panorama into a 16:9 screen or, for example, convert a 3:2 landscape format into a portrait format. Of course, the method works non-destructively, leaving your original images untouched.
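Content-aware scaling is widely built on the seam-carving technique: the image is analyzed for connected paths of low-contrast pixels (seams), which can then be duplicated to stretch the image or removed to compress it, leaving high-detail regions untouched. m.objects does not disclose its exact method, so the following Python sketch of finding the cheapest vertical seam with dynamic programming is illustrative only.

```python
def min_vertical_seam(gray):
    """Find the lowest-energy vertical seam (one column index per row)
    in a grayscale image given as a list of rows of brightness values.
    Energy is the horizontal gradient; dynamic programming accumulates
    the cheapest connected path from top to bottom."""
    h, w = len(gray), len(gray[0])
    energy = [[abs(row[min(x + 1, w - 1)] - row[max(x - 1, 0)])
               for x in range(w)] for row in gray]
    cost = [energy[0][:]]
    for y in range(1, h):
        prev = cost[-1]
        cost.append([energy[y][x] + min(prev[max(x - 1, 0):x + 2])
                     for x in range(w)])
    # backtrack from the cheapest cell in the bottom row
    x = min(range(w), key=lambda i: cost[-1][i])
    seam = [x]
    for y in range(h - 2, -1, -1):
        lo = max(x - 1, 0)
        x = min(range(lo, min(x + 2, w)), key=lambda i: cost[y][i])
        seam.append(x)
    return seam[::-1]

# a smooth region in column 2 attracts the seam, while the
# high-contrast columns on either side are left alone
img = [[1, 9, 5, 9, 1],
       [1, 9, 5, 9, 1],
       [1, 9, 5, 9, 1]]
print(min_vertical_seam(img))  # [2, 2, 2]
```

Duplicating such seams stretches the image in its least noticeable areas, which matches the behavior described above: the decisive motif area stays unchanged while the spreading happens elsewhere.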
The m.objects internal title editor offers you comprehensive options for inserting, formatting and positioning texts in your multivision. The texts are treated in the same way as images and can therefore be freely adjusted and animated with all m.objects tools. m.objects automatically inserts an image field object into the light curve of a text, which can be used to align the text.
Click once with the left mouse button in the image tracks to make them the active component.
Now use the mouse to drag the *Text element macro from the tool window to the desired position in your multivision and drop it onto a free space in an image track.
As texts usually appear in front of a background, it is advisable to use one of the upper image tracks for this. The m.objects title editor then opens and the light curve of the text already appears on the image track.
All entries and changes you make here can be followed directly on the screen.
Alternatively, you can also right-click on an empty space in an image track. In the context menu that pops up, select Insert text element and you will now also have the title editor editing window in front of you.
Enter your text in the input field. If the Font preview option is selected, the text is displayed directly in the corresponding font; you can adjust the display size using the + and - buttons. For fragile or heavily decorated fonts, it may be more practical not to edit the text in the original font - in that case, simply uncheck the Font preview box.
Then confirm with OK or continue editing the text.
To call up the title editor again later, double-click in the light curve of the text.
The title editor offers you a whole range of formatting options for texts: You can select the text color using the button at the top right, which opens the color picker. Or you can click on the box next to Pipette.
Now you can pick up a color for the text directly from the m.objects canvas by clicking on it.
Use the selection fields for font and font style (italics, bold and the possible combinations thereof) to enter the corresponding formatting. Below this, you will also find options for the spacing of the text and the line spacing for multi-line text.
Entries can be made both numerically in the input fields and using the arrow controls. You can read more about using the arrow controls in m.objects in the chapter Working with the arrow controls.
You can assign the font, font color and font style for individual words, across words and even for individual letters, and combine them as you wish. To do this, simply select the relevant part of the text and make the desired changes. In this way, you can format any text passages completely freely.
Changing one of these parameters again without the text selected will only change the characters that were not previously formatted separately. If, on the other hand, you select the entire text and make a change, all previously changed settings of this type (e.g. the font color) will be reset.
In the title editor, you also determine the alignment of the text. This option is particularly useful for multi-line texts and when aligning several texts to each other. As soon as you click on the alignment value, a menu with the available options opens.
In multi-line texts, you can display the individual lines centered to each other or justified. For right- and left-aligned display, you will find the two options (lines) and (in the image field). If you select the value right-aligned (lines) or left-aligned (lines), the individual lines will be aligned to the right or left of each other. However, the text itself remains centered in the surrounding image field. You can recognize this by the pink frame around the text. If it is not visible, click once on the image field symbol in the light curve.
If you select right- or left-aligned (in the image field) instead, the text will be aligned to the right or left and also positioned on the right or left edge of the image field.
m.objects inserts an additional zoom object into the light curve of the text, the center of which is aligned accordingly. You don't need to worry about this, however, as m.objects performs this alignment automatically.
If you click on the image field object in the light curve of the text again, the pink frame around the text appears on the canvas. By clicking and moving this frame with the mouse, you can position the text directly at the desired location on the canvas.
Alternatively, double-click on the image field object in the light curve so that the associated properties window opens. This procedure is particularly useful if you want to move the text exactly vertically or horizontally.
You can now also position the text using the arrow controls at the position of the image field or by entering numerical values. In this case, make sure that the checkmark is set in the middle of the linking symbols.
Use the Define line height option to give a line of text a freely adjustable height in relation to the height of the canvas. Simply place a tick next to Define line height and enter the desired percentage value numerically or using the arrow control. Here too, you can see the changes directly in the canvas.
This allows you to always create texts in exactly the same font size by transferring the value for the line height to other texts - for example, for several text blocks in a title credit.
m.objects automatically adjusts the size of the image field surrounding the text. Alternatively, you can change the size of the text and thus the font size directly in the canvas by clicking and dragging the pink frame. This rather intuitive procedure is useful if the font size does not have to correspond exactly to a certain value.
You can of course animate a title created in this way with the dynamic objects of m.objects or distort it with the 3D object. In this way, you can create almost any effect with the title editor.
In our example file, the creation of the title is not limited to the input in the title editor. A 3D object is also used here to tilt the title to match the perspective of the water surface. Furthermore, a copy of the title in track B in black font color and with a slightly offset image field creates an interesting shadow effect.
The m.objects internal title generator can be used creatively and flexibly, especially in conjunction with dynamic objects. In the example, this is done statically, but since a title is handled by the software just like any other image, you can of course also zoom, rotate or move it three-dimensionally.
The m.objects title editor supports the Unicode character set, so that special characters are also available here. This means that titles and texts can be created in almost any language.
If you want to insert special characters, click on the Special characters button. The character table then opens, in which you first select the desired font at the top.
Below you will now see an overview of the available special characters. Select the appropriate one with a mouse click and confirm by clicking on Select and copy. The special character is now stored in the clipboard.
Now click on the desired position in the title editor and insert the special character via the context menu (right-click and Insert) or with the key combination Ctrl + V (cmd + V on macOS).
Note: Please note that the same font must be selected in the title editor as in the character table. To insert a single special character from a certain font into a title that is set in a different font, first select the desired characters in the title editor and then switch the font. m.objects then automatically inserts the corresponding so-called tags in square brackets so that different fonts can be mixed within a title.
If you want to edit or present a mos file with title(s) on several computers, the font used must be installed on each one. If, for example, the output resolution changes and all textures are then recalculated, m.objects must be able to access this font so that the title can be displayed correctly.
The situation is different with EXE files. These already contain the information for displaying the font, so that existing titles are displayed independently of the font installation. This means that nothing stands in the way of a presentation on different computers.
m.objects offers a convenient connection to external image editing programs. Programs that work with a transparent background and can also save this transparency are suitable for this.
If you are not using Photoshop but another software, some steps will differ more or less from the procedure described here.
Create a new image, e.g. in Photoshop. Select a sufficiently high resolution for the text, for example 2,500 x 500 pixels. The color mode should be set to RGB, 8-bit, and select transparent for the background content.
Now create your text with the Photoshop text tool. Use layer effects, the transform tool or other tools to edit the text and create the desired effects.
When the text is ready, select the Move tool from the Photoshop tool palette. Click in the image and drag the content from Photoshop directly onto a free area of the upper image track within m.objects. If you are working with a single screen and the m.objects main window is currently covered by Photoshop, first drag the image over the m.objects icon in the taskbar at the bottom of the screen - without releasing the mouse button - to bring m.objects to the foreground, and from there directly onto the image track.
m.objects now displays exactly the relevant part of the created title on the image track. The name displayed in the bar below the title is something like Photoshop_Date_Time.
You can close the image editing program now, as m.objects has already created a file in PNG format from the image content dragged over within the current project folder (in the Pic/Dropped subdirectory). The PNG (Portable Network Graphics) file format has the advantage of containing the transparency information of the image.
The text integrated in m.objects in this way is already set to the Overlapping image blending mode, so that it completely overlaps the underlying background image. You can then use an image field object to set the appropriate size and position of the title.
Note: This chapter describes m.objects' internal static image processing, which is available in all expansion levels of the software. The dynamic Image/video processing object, which is available from the m.objects live expansion level, is even more powerful. A detailed description can be found in the chapter Color grading with image/video processing.
Before you present your photos, you will usually edit or optimize them first. For this purpose, m.objects has a powerful internal image editing function that you can use to carry out essential post-processing steps. You therefore have the option of inserting image material directly into m.objects and refining it further here. A crucial point is that m.objects works non-destructively: the program does not change anything in your original files. These remain untouched, so you do not run the risk of permanently and irretrievably altering your image material through intensive experimentation. The editing steps you carry out in m.objects are only applied to the display on the canvas.
Double-click on the light curve of the respective image on the image track to access image editing. At the bottom of the window that now opens, you will find a list of the individual editing options. To the right of this you will see a slider with which you can set the editing value. With the canvas open, you can observe and modify the changes to the image. Depending on the image size and computer performance, it may take a moment for the change to be displayed on the canvas. Click the Reset button to return to the original value.
You will find a detailed description of the Intelligently adjust aspect ratio option in a separate chapter: Intelligent adjustment of the aspect ratio.
An important note at this point: As of the m.objects live expansion level, the Image/video processing and Blur dynamic objects provide additional options for editing important image parameters and using these changes as dynamic animations. A purely static correction, on the other hand - for example of the brightness or sharpness of an image, or a targeted blur - should be carried out in the image editing window, not with the dynamic objects. The difference is that static image editing is already baked into the textures of an image before the presentation, whereas corrections and effects with dynamic objects are calculated in real time during the presentation and use the computer's graphics resources accordingly.
You can read more about dynamic image processing in the chapter Color grading with image/video processing.
The options for static image processing:
Brightness
With brightness correction, you can change the overall brightness of the image, which is a simple and effective way to correct underexposure or overexposure. However, make sure that bright areas are not 'washed out' and dark areas do not lose too much detail.
Contrast
For low-contrast images, increase the value here; images with extreme contrasts can be toned down accordingly.
Saturation
Saturation refers to the intensity of the colors. Images that appear dull gain brilliance through optimization, while you can reduce the saturation accordingly if the colors appear too bright. By completely reducing the color information (leftmost slider), you can create a suitable background image for a title insert, for example.
Color tone
It is best to try out the effect of the color tone correction on your own photos. By changing the value slightly, you can correct slight color shifts in the image. As soon as you move the slider a little further here, the color tone correction becomes more of an effect filter, with which you can achieve interesting color shifts.
Black point / white point
The example shows a photo that contains neither true black nor true white. Instead, the corresponding areas appear in a washed-out shade of gray. In such a case, move the black point correction slider so far to the left that you see a true black on the screen. A new black point is assigned to the image, which has a clear effect on the brightness distribution.
Proceed in a similar way to correct the white point. Move the slider so far to the left that you achieve a true white for these areas.
If you correct the black point too much, you will place it where there should not actually be any black. This causes the dark areas of the image to 'run', i.e. they no longer have any detail.
If the white point is corrected too much, the bright areas will be 'washed out'. There is then a lack of detail, which is usually even more disturbing in the result than if the black point is corrected too much.
In contrast to brightness correction, moving the white or black point does not make the entire image brighter or darker. When correcting the black point, the value for the white point is retained and vice versa.
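The effect of moving the black and white points can be expressed as a simple linear remapping of brightness values. This is a generic levels formula, not m.objects code; the function name and the 0..1 value range are assumptions for illustration.

```python
def apply_levels(value, black_point, white_point):
    """Linearly remap a brightness value (0..1) so that black_point
    maps to 0.0 and white_point maps to 1.0; values outside the new
    range are clipped ('running' shadows / 'washed-out' highlights)."""
    v = (value - black_point) / (white_point - black_point)
    return min(1.0, max(0.0, v))

# a washed-out image occupying only 0.15..0.85 regains full contrast
print(apply_levels(0.15, 0.15, 0.85))  # 0.0 -> true black
print(apply_levels(0.85, 0.15, 0.85))  # 1.0 -> true white
print(apply_levels(0.50, 0.15, 0.85))  # mid-gray stays mid-gray
```

The clipping in the last line of the function is exactly what happens when the black or white point is corrected too far: values beyond the new endpoints all collapse to 0 or 1 and lose their detail.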
Gamma
While the black and white points refer to the outer areas of brightness, i.e. completely black or completely white, the gamma value of an image refers to the middle brightness areas.
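Mathematically, a gamma correction is a power function applied to each brightness value; it shifts the midtones while leaving pure black and pure white untouched, which is why it complements rather than replaces the black/white point correction. The sketch below illustrates the standard formula; m.objects' exact implementation is not documented.

```python
def apply_gamma(value, gamma):
    """Gamma correction of a brightness value in 0..1: gamma > 1
    lifts the midtones, gamma < 1 darkens them; the endpoints
    0.0 (black) and 1.0 (white) are unchanged."""
    return value ** (1.0 / gamma)

print(apply_gamma(0.0, 2.2))            # black stays black
print(apply_gamma(1.0, 2.2))            # white stays white
print(round(apply_gamma(0.5, 2.2), 2))  # midtone is lifted to ~0.73
```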
Normalize
If you move the slider to 1, you apply this image correction. Normalize means an automatic correction of the image, which is carried out entirely by the program. This function can be used for subjects that have a 'normal' distribution of light and dark areas and may save a lot of time. You should not use it on subjects such as a sunset, as the software would detect underexposure here and correct the image to be much too bright.
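Automatic corrections of this kind are typically implemented by deriving the black and white points from the image's own brightness histogram and stretching everything in between. The sunset caveat above follows directly from this approach: a deliberately dark image looks like an underexposed one to the histogram. The following sketch is a generic auto-levels routine, not m.objects code; the percentile-based outlier clipping is an assumption.

```python
def normalize(values, clip=0.01):
    """Automatic levels: take the darkest and brightest pixel values
    (ignoring the extreme `clip` fraction as outliers) as new black and
    white points and stretch everything in between to the full 0..1 range."""
    s = sorted(values)
    lo = s[int(clip * (len(s) - 1))]
    hi = s[int((1 - clip) * (len(s) - 1))]
    if hi == lo:
        return values[:]   # flat image: nothing to stretch
    return [min(1.0, max(0.0, (v - lo) / (hi - lo))) for v in values]

pixels = [0.3, 0.4, 0.5, 0.6, 0.7]   # low-contrast image
print(normalize(pixels, clip=0.0))   # stretched to the full 0.0 .. 1.0 range
```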
Noise
This function can be used, for example, to remove minor dust contamination from scans. However, the more you move the slider to the left, the more the image loses sharpness. Moderate use of the function is therefore advisable. If, on the other hand, you drag the slider to the right, you add interference patterns to the image, which can be used as an effect if necessary.
Interpolate brightness
This function works in a similar way to the noise filter. Significant differences in brightness in neighboring pixels, which look like image interference such as noise, are balanced out. Contours are retained as far as possible. However, this function also leads to a loss of sharpness and detail when used to a great extent.
Blur / Sharpen
If you drag the slider to the left, you will blur the image, similar to the Gaussian blur. Moving it to the right sharpens the image. Sharpening in particular takes place here in relatively clear gradations and quickly leads to clear artifacts in the image, as is usual with resharpening. If, on the other hand, you want to sharpen in finer gradations, select the next function:
Manual resharpening
Here you sharpen an image in very fine gradations, similar to what you might know from unsharp masking in Photoshop.
In addition to these correction options, m.objects' internal image processing also offers a range of effect filters that are less easy to explain and more worth trying out.
m.objects has professionally integrated color management. The correct color display of digital images on the projector or monitor must follow certain rules throughout the entire processing chain. In simple terms, color management means the ability to correctly interpret color information from an image and pass it on correctly to the projector or monitor.
The ICC (International Color Consortium) has developed a standard for color profiles (ICC profiles) to enable the various components involved, such as digital cameras or scanners, image editing and presentation software and output devices, to find a common denominator. Different color profiles reflect the different extensions of the color spaces of specific technologies and devices.
Color management is the decisive criterion for the correct color presentation of your photos and therefore an important capability of m.objects. After all, you don't want to see your images on the canvas or screen with roughly matching colors, but exactly as you took them.
First of all, the digital photo itself provides the necessary information. Its colors are stored according to the standards of a specific color space such as sRGB and Adobe RGB. You will encounter these two color spaces quite frequently in photography.
The sRGB color space is a common denominator for many devices and is well suited for presentation purposes such as projection. It is identical to the RGB color space targeted by HDTV devices, and good digital projectors such as the Canon XEED series offer sRGB-compliant projection. However, since sRGB is smaller than other common color spaces, it would be unfavorable to work in it from the start of the processing chain: filtering during image editing can cause a loss of image information, so it is better to begin in a more differentiated color space. Adobe RGB is very common, and many digital cameras can capture images in this color space. Images captured this way immediately contain the appropriate color profile, enabling all downstream applications to interpret them correctly.
You can use the richer color information of Adobe RGB to edit your photos with more latitude and still present them with correct colors at the end. So if your digital camera can shoot in the Adobe RGB color space, you should definitely select this option.
All editing can therefore be carried out in this color space, preferably with 16 bits per color channel. With its integrated color management, m.objects is able to perform the transformation to the required sRGB color space automatically.
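m.objects performs this Adobe RGB to sRGB transformation internally via its color management; the following sketch only illustrates the underlying colorimetric principle for a single pixel, using the published D65 matrices for both color spaces. The function name is made up for illustration and has nothing to do with m.objects' actual implementation.

```python
# Toy sketch of an Adobe RGB (1998) -> sRGB conversion for one pixel.
# m.objects handles this (including ICC profiles) automatically; this
# only shows why the conversion can lose information.

ADOBE_TO_XYZ = [(0.5767309, 0.1855540, 0.1881852),
                (0.2973769, 0.6273491, 0.0752741),
                (0.0270343, 0.0706872, 0.9911085)]

XYZ_TO_SRGB = [(3.2404542, -1.5371385, -0.4985314),
               (-0.9692660, 1.8760108, 0.0415560),
               (0.0556434, -0.2040259, 1.0572252)]

def _mat(m, v):
    return tuple(sum(row[i] * v[i] for i in range(3)) for row in m)

def adobe_to_srgb(rgb):
    """rgb: Adobe RGB values in 0..1. Returns clipped sRGB values 0..1."""
    # 1. remove the Adobe RGB gamma (exactly 563/256, approx. 2.2)
    linear = tuple(c ** (563 / 256) for c in rgb)
    # 2. Adobe RGB -> XYZ -> linear sRGB
    srgb_lin = _mat(XYZ_TO_SRGB, _mat(ADOBE_TO_XYZ, linear))
    # 3. clip out-of-gamut values, then apply the sRGB transfer curve
    out = []
    for c in srgb_lin:
        c = min(1.0, max(0.0, c))
        out.append(12.92 * c if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055)
    return tuple(out)
```

White stays white, but a fully saturated Adobe RGB green produces a negative red component in sRGB and must be clipped; that clipping is exactly the kind of information loss the text describes.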
Color management is usually automatic. However, it can be switched off completely within m.objects via the component properties of the projection.
However, the setting made there can be overridden for each individual image if desired by selecting Suppress or Force in m.objects image editing.
In addition, m.objects has the ability to take into account any correction profile created for the monitor or projector, even with a smooth full-screen display. So if you want to work with exact color accuracy, you should calibrate your monitor or projector in conjunction with the graphics card.
The best devices for color calibration are those that you can place on the screen and measure the displayed color values.
Such devices provide very good calibration results. Pure software solutions, on the other hand, are too inaccurate for a usable calibration.
Color management therefore performs important tasks without you as the user having to worry about them. The only thing you need to tell m.objects is whether or not the program should use the output device's target color space when rendering images. To do this, you will find the option Use the device's target color space (instead of sRGB) in the canvas settings under Real-time renderer.
As long as the output device is not calibrated, you should not select this option. This could even lead to problems. However, if the device is calibrated, you can use this option to take advantage of all the benefits of Color Management and present your photos in the best possible way.
A few general remarks on the handling of images in m.objects. One of the decisive characteristics of an image is its resolution. It is practically impossible for an image to have too high a resolution for processing in m.objects, as m.objects automatically reduces the image to the required size. If required, m.objects can also sharpen the scaled image, as every resizing of a digital image is accompanied by a more or less pronounced loss of sharpness.
For optimum results in digital photography for m.objects:
- Always photograph at the maximum resolution of the camera, even if the screen or projector usually cannot reproduce this resolution in full. This ensures maximum flexibility and quality for further editing and arranging.
- If possible, use the RAW format and later convert it into a generally readable file format using the software supplied with the camera or a suitable program (e.g. Lightroom). Formats with 16-bit color depth per color channel, such as 16-bit TIFF, are very suitable: they do not exhibit visible quantization effects during later filtering.
- If the camera allows it, work in the Adobe RGB color space. It covers a wider spectrum than sRGB, for example, and is therefore better suited for post-processing steps that affect color and gamma. The necessary conversion to sRGB takes place at the very end; in m.objects this is done automatically.
- Do not sharpen images in image processing, or only very slightly. Sharpening should - if necessary - always be the last processing step, especially after scaling. And since m.objects itself still has to scale images as required, sharpening is best left to it as well.
So much for the theory. In practice, there could be at least one reason not to comply with the above rules:
- The maximum resolution requires more capacity on the memory card and later on the hard disk than smaller images; the losslessly compressed RAW format in particular is very demanding. If memory or computing power is scarce, a compromise between quality/flexibility and handling can therefore make sense.
There is a widespread misunderstanding regarding the specification of pixels per inch (usually given as dots per inch, dpi). It is completely irrelevant for processing in m.objects: there are no absolute sizes, because the projected image can just as well be 1 meter or 10 meters wide, and images can grow dynamically or be embedded in image fields. The dpi specification is, however, important when preparing print products, because in conjunction with the pixel resolution it determines the absolute print size.
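The relationship described in the last sentence can be written down in two lines; the function name and the sample numbers are purely illustrative.

```python
# Print size follows from pixel count and dpi; for screen or projector
# output only the pixel count itself matters, the dpi tag is ignored.

def print_size_cm(pixels_wide, pixels_high, dpi):
    """Absolute print size for a given dpi setting (1 inch = 2.54 cm)."""
    return (pixels_wide / dpi * 2.54, pixels_high / dpi * 2.54)

# The same 6000 x 4000 pixel file prints at about 50.8 x 33.9 cm at
# 300 dpi and at twice that size at 150 dpi - while its appearance in
# a projected show is completely unchanged either way.
```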
m.objects can process practically all common image file formats. If you open the file selection form for image files - for example, by double-clicking in an empty compartment of the lightbox - you will see files of around 20 common types. To also see image files of other formats, select All files in the File type field.
In most cases, the PNG format is suitable for images with partial transparency (clipping). However, other file formats such as TIFF can also transport transparency information via a separate alpha channel.
In contrast to formats such as bitmap or standard TIFF, JPEG is a lossy compressed file format. Its advantage lies in its relatively small file size. If you create or edit images with external applications such as Adobe Photoshop, do not select too low a quality level (i.e. too high a compression) when saving in JPEG format. No general statement can be made about the resulting image quality for a given quality setting, but with settings from approx. 85% (in Photoshop, from level 10) most images are at a level suitable for screen display and projection. If it is unavoidable to save JPEG images in JPEG format again after editing, always use a high quality level, as the compression losses become more visible with each renewed compression of a previously compressed file.
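The generational loss mentioned above can be illustrated with a deliberately simplified model: JPEG quantizes transform coefficients, and re-encoding quantizes data that has already been quantized, so errors can accumulate. The toy sketch below quantizes plain sample values instead of DCT coefficients and is only meant to show the principle, not JPEG itself.

```python
def quantize(values, step):
    """Round each value to the nearest multiple of 'step' - a toy stand-in
    for JPEG's coefficient quantization."""
    return [round(v / step) * step for v in values]

def max_error(original, processed):
    """Largest deviation from the original data."""
    return max(abs(a - b) for a, b in zip(original, processed))

data = [v * 0.73 for v in range(100)]   # arbitrary stand-in for image data
once = quantize(data, 7)                # first save
twice = quantize(once, 5)               # edit and save again, new settings

# A single save bounds the error by half the step size; a second save
# with a different step can push individual values beyond that bound.
```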
The newer JPEG 2000 format can also be read by m.objects, but it offers no advantages for presentation applications with high quality requirements, as it is only more efficient at strong compression levels, which should be avoided here anyway.
By simply pressing the [Print] key on your keyboard, you can capture compositions worth keeping from the canvas as single images in high resolution and use them, for example, for high-quality prints, flyers or posters, for your website or as a preview image for a YouTube video.
The window for single image export then opens. The values for the canvas resolution are initially specified here under Resolution. You can increase these as required to create a high-resolution image of the canvas content. The prerequisite for this is, of course, that the content used is available in your project in a corresponding resolution.
If you change the aspect ratio, black borders are added to the left and right or top and bottom of the exported image to compensate for the deviation from the aspect ratio of the canvas.
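The size of those borders follows directly from the two aspect ratios. A small sketch of the geometry (the function name is invented for illustration; m.objects does this internally):

```python
def letterbox(canvas_w, canvas_h, export_w, export_h):
    """Fit a canvas of canvas_w x canvas_h into an export frame of
    export_w x export_h without distortion; return the black border
    added on each side (left/right, top/bottom) in export pixels."""
    scale = min(export_w / canvas_w, export_h / canvas_h)
    content_w = int(canvas_w * scale)   # truncate to whole pixels
    content_h = int(canvas_h * scale)
    return (export_w - content_w) // 2, (export_h - content_h) // 2

# A 16:9 canvas exported into a square frame gets bars top and bottom,
# e.g. letterbox(1600, 900, 1000, 1000) yields borders of (0, 219).
```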
The global zoom should always be set to 100% for exporting the canvas content as a single image.
At the bottom of the dialog box, under Frame type, enter the format in which you would like to export the image. You can choose between JPEG, BMP, TIFF, PNG and JPEG 2000 and also specify the compression quality for JPEG and JPEG 2000.
Select the desired format here and confirm with OK. In the following window, specify the directory in which the image is to be saved. By default, m.objects selects the MixDown folder in the m.objects data directory. Click on the Save button to create the single image.
Screenshots, image sections, logos or other image content on the clipboard can be pasted directly into the m.objects timeline.
Would you like to integrate an image, a map or a graphic from an existing document or website into a presentation, or use a screenshot or a current selection from image editing in your m.objects show? All you need to do is have the desired image content in the clipboard, from which you can paste it directly into the m.objects timeline using Paste from the context menu or [Ctrl] + [V]. You can often copy images from websites or documents via the context menu, or copy them after selecting with [Ctrl] + [C]. m.objects automatically saves the image data in the dropped subfolder of the currently open production.
Animations bring movement to an AV show: they provide excitement and variety and draw the viewer's attention to crucial details, turning static images into moving objects.
With zoom, image field, rotation and 3D animation, m.objects provides you with powerful tools that can be used individually or in combination to realize all possible motion sequences and can also be used to adjust or correct individual images.
As of the m.objects live expansion level, additional dynamic objects are available with passepartout, shadow / glow, blur, image / video processing, mirroring and speed, which offer exciting possibilities for special effects and animations.
Certain working techniques and options can be found in all m.objects dynamic objects. They should therefore be explained at the beginning to make it easier to understand and use the dynamic objects.
You will find the dynamic objects in the tool window. This window is context-sensitive, i.e. it displays the corresponding tools or objects depending on the selected component (timeline, image tracks, sound tracks, etc.). In this case, the image tracks must be selected; if necessary, simply click on one of the image tracks.
From the tool window, drag the desired dynamic object onto a light curve with the left mouse button held down and release the mouse button there. A corresponding icon is created on the light curve and the associated editing window opens, in which you can edit the parameters of the dynamic object. You can deactivate the automatic opening of the editing window via the option Automatically open properties when inserting dynamic objects under Settings / Program settings. In that case, open the editing window by double-clicking on the icon in the light curve.
For a dynamic motion sequence, or animation, you always use at least two dynamic objects in m.objects, for example two zoom objects. The first forms the starting point of the animation, the second the end point. If you now change the zoom factor of the second zoom object - we will explain in detail how to do this - m.objects automatically creates an animation between these two objects.
The dashed line between the two square symbols indicates that an animation is taking place here. However, if the line is solid, both dynamic objects have identical values, meaning that there is no change and therefore no animation.
By adding further dynamic objects between the first and the last, you can insert intermediate stations into the animation.
The speed of an animation is determined by the distance between the dynamic objects: the shorter the distance, the faster the animation. So if you want to change the animation speed, simply change the distance between the dynamic objects.
There are separate setting options for each dynamic object. However, you will find three of them in (almost) identical form for all of them:
All three options are selected by default. The only difference between them is that the lower option shows the name of the selected dynamic object.
Restrict to current image: If you deselect this option, m.objects includes similar dynamic objects in the same image track before and after the light curve in the animation.
Acceleration/deceleration phase: m.objects animations start and stop buttery-smooth by default. This option allows you to influence how long or short the acceleration and deceleration phases last. Especially with longer animations, such as a pan through a panorama, you can achieve a more uniform speed in the middle section of the animation by shortening this value. If you deselect the option, the animation starts and stops abruptly and runs at a constant speed in between.
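The effect of the acceleration/deceleration phase corresponds to what animation software generally calls easing. The classic smoothstep curve illustrates the idea; this is a standard textbook curve, not necessarily the exact curve m.objects uses internally.

```python
def eased_progress(u, ease=0.5):
    """Progress 0..1 along an animation for normalized time u in 0..1.
    ease=0 gives constant speed (abrupt start and stop); larger values
    blend toward a smoothstep curve with soft acceleration and
    deceleration phases."""
    smooth = u * u * (3.0 - 2.0 * u)    # classic smoothstep curve
    return (1.0 - ease) * u + ease * smooth

# At the midpoint both curves agree; near the start the eased curve
# lags behind the linear one, i.e. the animation accelerates gently.
```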
Movement from previous ... object: This option ensures that an animation takes place between two dynamic objects. If you deselect this, the change between the two objects takes place without a smooth movement.
The reproduction of high-resolution images with sharply contoured or finely structured motifs can lead to clearly visible scaling effects such as undesirable geometric patterns (moiré) and flickering - especially in animations such as tracking shots / Ken Burns effects. m.objects reduces these disruptive effects to a minimum without sacrificing sharpness in the display. This technique is advantageous in all applications and is therefore always switched on. Please also make sure that no sharpening is activated in the output device (e.g. TV or projector), as this can in turn cause flickering effects.
The settings for Antiflimmer / Antimoiré can be found under Settings / Screen settings.
m.objects specifies the Standard option here. The Gamma-corrected option performs the static scaling of the images with an explicit gamma correction, which is technically the more correct result but greatly slows down the generation of the textures. The recommended setting is therefore Standard, as a dynamic correction is already applied there, effectively preventing flickering and moiré effects.
You can use the zoom to zoom into an image or other object, i.e. to enlarge a section of it instead of the complete view on the screen. The zoom center can also be moved as required.
To assign a zoom object to an image, click on the blue square with the black Z and the label Zoom in the tool window, hold down the mouse button and drag the symbol onto the yellow light curve of the image.
The editing window for the zoom object now opens automatically. If you have deactivated this automatic function or have already closed the window, double-click on the Z icon in the light curve.
The upper value indicates the zoom factor, which is initially set to 100%. If you change this value by dragging the orange arrow upwards, i.e. increasing the zoom factor, the image will zoom in. You can zoom out again by dragging the arrow downwards. Alternatively, you can also enter the zoom factor numerically.
A detailed description of working with the arrow controls can be found in the chapter Working with the arrow controls.
You will see a green dot in a green circle in the middle of the screen. This dot marks the zoom center.
If you position the mouse pointer over it, it becomes a quadruple arrow. If the zoom factor is greater than 100%, i.e. you have zoomed into the image, you can use it to freely move the image section. You can achieve the same effect by changing the values for the center in the editing window. However, working directly in the canvas is often more intuitive. To be able to move the zoom center in the canvas, the editing window of the zoom object does not need to be open. It is sufficient to click on the Z icon in the light curve beforehand.
At the top of the editing window, you will find the Set to actual image size button. This allows you to enlarge the image so that it exactly matches the selected output resolution of the canvas. One pixel of the image file then corresponds to one canvas pixel when the canvas is displayed in full screen. This is therefore the largest possible loss-free display of the image. You can enlarge it even further, but then the screen has to interpolate additional pixels, which impairs the image quality. It therefore makes sense to integrate images in their original resolution in m.objects in order to have sufficient resolution reserve for zoom effects. Downsampling beforehand is therefore counterproductive.
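What Set to actual image size computes can be stated in one line. The sketch assumes the default case in which 100% zoom means the whole image is scaled to fit into the canvas; the function name is invented for illustration.

```python
def actual_size_zoom(image_w, image_h, canvas_w, canvas_h):
    """Zoom factor (in percent) at which one image pixel maps to exactly
    one canvas pixel, assuming 100% means 'scaled to fit the canvas'."""
    fit_scale = min(canvas_w / image_w, canvas_h / image_h)
    return 100.0 / fit_scale

# A 6000 x 4000 image on a 1920 x 1080 canvas is fitted at scale
# 1080/4000 = 0.27, so 1:1 display corresponds to a zoom of about 370%.
# That headroom is exactly the resolution reserve for zoom effects that
# the text recommends keeping.
```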
At the top of the editing window you will also find the option Fill at 100% image field.
If you select this option, the zoom factor 100% is interpreted in a different way, namely in relation to the canvas or - if the image is displayed at a reduced size using an image field in the canvas - in relation to the image field. The value 100% now means that the image is scaled exactly so that it always fills the screen or the image field completely, even if it deviates from the aspect ratio of the screen or the image field. The zoom factor therefore automatically adapts to the current dimensions and changes in the image field. You can read more about working with image fields in the following chapter.
Up to this point, you have become familiar with the basic functions and options of the zoom object. Things get really exciting when several zoom objects are used to set static images in motion.
Drag a second zoom object onto the light curve. In the following example image, the first zoom object is positioned at the beginning and the second at the end of the image stand time.
For the second zoom object, select a significantly higher zoom factor and move the image section if necessary. Confirm the changes with OK and start the locator just before the light curve. The view on the screen shows a zoom movement through the image. On the light curve, you can see that the line between the two zoom objects is now curved and dashed. There is therefore movement between the zoom objects.
The steeper the curve, the greater the change in the zoom factor. The distance between the zoom objects determines how quickly the animation runs.
The description of the dynamic settings in the lower area of the editing window can be found in the Dynamics options chapter.
You can now insert additional zoom objects into the existing animation and select different image sections, for example. As a rule, it makes sense to extend the image's standstill time accordingly so that the movements do not take place too quickly. You will soon realize that the zoom object alone gives you many possibilities for exciting dynamic effects.
The image field object allows you to limit the display of an image or other object to a part of the screen. For example, you can place a reduced image in front of a background image or divide the screen into several areas in which you can then show images, zooms or videos. Another particularly interesting feature is the option of displaying titles or even creating complete credits at the end of the presentation.
If image field objects are used to position images within the canvas (picture-in-picture), the overlapping mode is practically always required in the image mix. For this reason, the system automatically switches to overlapping mode when an image field object is created. However, each image can of course still be switched back to additive mode if necessary. You can read more about image blending in the Image blending chapter.
The image field object can be found in the tool window when the image tracks are activated: the green square with the diagonal double arrow and the image field label.
To use, drag the image field object onto the desired light curve. The properties window that now opens (if necessary, double-click on the icon in the light curve to open it) offers you the options for setting the size and position of the image field. As with the zoom object, you can also make the settings directly in the canvas.
To do this, the image field must be activated; if necessary, click on it with the mouse. You will now see a pink frame around the outer edge of the image in the canvas. Grab a corner of this frame with the left mouse button. The mouse pointer will turn into a diagonal double arrow. Hold down the mouse button and move the frame inwards a little. You will see that the picture becomes smaller and no longer fills the entire canvas.
If you now release the mouse button and move the mouse pointer further onto the image, it changes into a quadruple arrow with which you can move the image itself.
If you hold down the Shift key while moving with the mouse, it is only possible to move either vertically or horizontally. If you hold down the Shift key while you change the image field at one of the handles, the size changes proportionally, i.e. the aspect ratio of the image field is retained.
By reducing and enlarging the image field frame and moving the image itself, you can now fine-tune it until the result meets your expectations.
In the properties window of the image field object, you will first notice the four fields with the values for the position of the image field and the corresponding orange arrows. You can make changes here using the arrow controls or enter specific numerical values. This is particularly useful if an image field is to be placed in an exact position or if several image fields of exactly the same size are to divide the canvas into equal parts.
The four values under Position of the image field are to be understood as follows: The top and bottom values indicate the distance of the top and bottom edges of the image field from the top edge of the screen. Similarly, the left and right values indicate the distance of the left and right edges of the image field from the left edge of the screen.
You will also notice that if you change one value, the opposite value also changes. As long as the tick in the middle between the linking symbols is set, the four values are dependent on each other. If you remove the tick, you can change the values individually and thus also redefine the size of the image field.
This allows you to divide the screen into four areas of exactly the same size, for example.
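The 2x2 split mentioned here translates into four sets of position values, with edge distances measured from the top and left canvas edges as the text describes. A sketch of the percentages involved (the function name is invented for illustration):

```python
def quadrant_fields():
    """Position values (top, bottom, left, right, each in percent,
    measured from the top and left canvas edges as in the image field
    dialog) for four image fields of exactly the same size."""
    layout = {"top left": (0, 0), "top right": (0, 1),
              "bottom left": (1, 0), "bottom right": (1, 1)}
    fields = {}
    for name, (row, col) in layout.items():
        # each field spans 50% of the canvas in both directions
        fields[name] = (row * 50, row * 50 + 50, col * 50, col * 50 + 50)
    return fields
```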
The diagram shows the settings in the image fields according to the arrangement of the images on the screen:
If an image deviates from the aspect ratio of its image field, insert a zoom object into the light curve and increase the zoom factor until the image completely fills its image field. There is also the option Allow distortion of content. In this case, an image is distorted until it corresponds to the aspect ratio of the image field. This is usually not desired, but there may be cases in which distortion is a desired effect or the deviation is so slight that the distortion is not noticeable.
The arrow controls for width, height and size, which you can see on the right-hand side of the editing window of the image field object, enlarge or reduce the image field from its center in both directions.
The stereo plane option only applies to stereoscopic, i.e. 3D presentations, and allows you to easily move and change the stereo plane. A detailed description of this function can be found in the chapter Stereoscopy with m.objects.
Use the R button to reset all values for the position of the image field and, if necessary, the stereo plane to their original values.
If you are not using numerical values to position the image fields in the canvas, you can also create guides here, which make alignment much easier. To do this, right-click in the canvas and select Insert guide lines. You now have the option of creating a vertical or horizontal guide. Click on the desired selection and the guide will appear in the middle of the canvas. Use the mouse to move the line to the desired position. Add further lines as required. In the context menu of the canvas, you will also find the option Magnetic guides, which you can use to ensure that image fields automatically 'snap' to a guide when they are moved as soon as they come close to it. To hide or show guides again, select the Show guides and image fields option. To delete a guide again, simply move it to the edge of the canvas until it is no longer visible.
The other options for the image field object:
Adopt position from previous image field: The image field is adapted to the previous image field in the corresponding image track. This option is useful if an exact match is required.
Set image field to actual image size: As with the zoom object options, you can set the image here so that its resolution matches the canvas resolution.
As with the zoom, you can also create an animation for the image field object by using two or more objects.
For example, if you want an image or text to run across the screen from bottom to top, i.e. in the style of film credits, create an initial image field object in the relevant light curve. A text that you have created with the title editor already has an image field object at the beginning of its lifetime.
Enter 100% for the upper value for the position of the image field. Make sure that the check mark between the linking symbols is set. Reminder: This value marks the distance between the top edge of the image field and the top edge of the canvas. This means that the image field, and therefore the text, is now exactly below the canvas and cannot be seen.
Then insert a second image field object at the end of the stand time. This initially adopts the values of the first image field object. Enter 0% for the lower value for the position of the image field. This means that the image field is positioned exactly above the screen and cannot be seen.
A movement sequence now takes place between the two objects, the text moves from bottom to top. During playback, however, you will notice that the speed of movement is not uniform. In this case, you should therefore deactivate the Acceleration/deceleration phase option. The speed itself can be influenced by the distance between the two image field objects.
You can now insert additional text or images to complete the credits. The distance between the light curves in the picture tracks determines the chronological sequence of the credits. If the images were all at the same point in the timeline, all the lines of the credits would fade in at the same time and run upwards simultaneously.
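The scrolling-credits construction described above boils down to a linear motion of the field's top edge from 100% (fully below the canvas) to a value at which the field has left the canvas at the top. A sketch with constant speed, i.e. with the acceleration/deceleration phase deactivated as recommended (names invented for illustration):

```python
def title_top_position(t, duration, field_height_pct):
    """Top-edge position (in percent from the canvas top) of a credits
    line that starts fully below the canvas (top = 100%) and ends fully
    above it (bottom = 0%, i.e. top = -field_height_pct)."""
    u = min(max(t / duration, 0.0), 1.0)   # constant-speed progress
    start, end = 100.0, -field_height_pct
    return start + (end - start) * u

# Two credit lines staggered by a fixed offset on the timeline pass
# any given canvas position exactly that many seconds apart.
```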
Further information can be found in the chapter Creating texts with the title editor.
You can use the rotation object to rotate an image around a specific point. You can use this effect statically, for example to straighten a skewed horizon line, but you can also use it dynamically by creating a rotational movement.
The rotation object can be found in the tool window as a red icon with a white R and the label Rotation when the image tracks are activated.
The example of a clock pendulum is intended to illustrate the animation possibilities with the rotation object. The image of a clock face is located in the upper image track, while the pendulum is located in the track below as a separate image. The third image track only creates the white background.
This division is necessary in order to be able to animate the pendulum separately. The aim is to create a uniform pendulum movement that is repeated over the duration of the image.
To do this, first insert a rotation object into the light curve of the pendulum and position it just behind the fade-in. In the editing window (open it by double-clicking on the R icon in the light curve if necessary), you can set the rotation value of the object.
Increasing the value causes the pendulum to rotate clockwise and decreasing it causes it to rotate counterclockwise. You can change the value using the arrow control or enter it numerically. Clicking on the upper button labeled Reset to default value causes the pendulum to return to its original position, i.e. the rotation value is reset to 0°.
The center of rotation is now still in the middle of the pendulum string. For a realistic movement, however, it must be at the upper end of the pendulum, so that the pendulum swings beneath the clock in the usual way.
The center of rotation can be seen on the screen as a green circle with a green center. If you drag the mouse over it, the mouse pointer changes to a quadruple arrow.
Now move the center of rotation upwards with the mouse until it is just above the lower edge of the clock face. If you now change the rotation value again, you will see that the pendulum movement is correct.
The rotation is now set to 20° so that the pendulum points to the left. To make it swing to the right, you need a second rotation object, which you again drag from the tool window onto the light curve and place to the right of the first object. The second rotation object automatically adopts the settings of the first, so you do not need to adjust the center of rotation again. Simply change the rotation value to -20° so that the pendulum swings to the right at the same height. The distance between the two rotation objects determines how fast or slow the pendulum movement is.
Only a few simple steps are needed to achieve a smooth back and forth movement:
First select both rotation objects, right-click on one of them and select Wizards / Autoshow, multiple copy of objects from the context menu.
With Autoshow, the selected animation is copied several times - you enter how many times in the dialog box - and can then be added to the existing animation. For example, enter the value 7 here and confirm with OK. The copied animations are then 'attached' to the mouse pointer. Now position the mouse pointer behind the second rotation object so that the distance is the same as between the two existing objects. As soon as you click with the left mouse button, the copied rotation objects are placed on the light track. A message on the screen confirms the successful copying process.
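What the Autoshow wizard produces can be pictured as a list of alternating rotation keyframes at equal spacing. The sketch below generates the same pattern for the pendulum example; all names are invented for illustration, this is not how m.objects stores its objects.

```python
def pendulum_keyframes(start_time, half_period, swings, angle=20.0):
    """Alternating rotation keyframes (time, angle) for a pendulum that
    swings between +angle and -angle. 'swings' is the number of
    keyframes added after the first - the same effect as copying the
    two-object animation several times with Autoshow."""
    keys = [(start_time, angle)]
    for i in range(1, swings + 1):
        # even copies return to +angle, odd copies swing to -angle
        keys.append((start_time + i * half_period,
                     angle if i % 2 == 0 else -angle))
    return keys

# pendulum_keyframes(0.0, 1.5, 3) gives keyframes at 0, 1.5, 3.0 and
# 4.5 seconds, alternating between +20 and -20 degrees.
```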
This achieves the goal: the clock pendulum swings back and forth for as long as the picture is displayed. It may now be necessary to make a slight correction to the display duration.
The rotation object can also be used to make image corrections, for example if the horizon line in a photo is skewed. In such a case, drag a rotation object onto the light curve and change the rotation value with the slider until the horizon is straight. Then drag another image field object onto the light curve and enlarge it until the photo fills the entire canvas again and the black corners have disappeared. The arrow controls, which you can find in the properties window of the image field object, are particularly practical for this. Simply drag with the mouse to enlarge the image field until the image completely fills the canvas again.
Note: You can also use the 3D object to easily create multiple rotations. For example, if you want to rotate an object several times around its own axis, you can create such an animation with the 3D object with just a few mouse clicks. You can read more about this in the following chapter 3D object.
The 3D object is probably the most complex dynamic object in m.objects. It enables movements on all three spatial axes and allows settings in all decisive parameters. Spatial motion sequences can therefore be displayed extremely realistically. The 3D object is also used specifically in stereoscopy. You can find out more about this topic in the chapter Stereoscopy with m.objects.
To use it, drag the small orange square labeled 3D animation from the tool window onto the relevant light curve when the image tracks are active.
The corresponding properties window opens or is opened by double-clicking on the icon in the light curve.
The most important parameters of the 3D object can be found at the top left under Rotation angle: The values for X, Y and Z determine the orientation of an object in space.
You can change these values using the arrow controls. As soon as you drag over the controls with the mouse button held down, you can see how the position of the image changes in the canvas. You can also change the values for X and Y together using the double arrow control, which makes working even more intuitive.
You can of course also enter numerical values in the fields next to the arrow controls.
X represents - as in a coordinate system - the axis that runs horizontally through an image. Y is the vertical axis of the image and Z is the axis that is perpendicular to the image. If X and Y both have the value 0, the Z axis points directly out of the image towards the viewer. As soon as you enter different values for X and Y, the position of the Z axis in space also changes.
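As a rough mental model (our own sketch for illustration; m.objects' internal conventions and sign choices may differ), a rotation around the vertical Y-axis maps a point in the image field like this:

```python
import math

def rotate_y(point, angle_deg):
    """Rotate a 3D point around the vertical (Y) axis.

    Illustrative sketch only -- the rotation is applied internally
    by m.objects; the exact axis conventions here are assumptions.
    """
    a = math.radians(angle_deg)
    x, y, z = point
    return (x * math.cos(a) + z * math.sin(a),
            y,
            -x * math.sin(a) + z * math.cos(a))

# A point on the right-hand edge of the image field (x = 1) swings
# back into depth (z) when the field is rotated 90 degrees around Y,
# while its height (y) stays unchanged:
print(rotate_y((1.0, 0.0, 0.0), 90))
```

A 45° value, as in the example above, would place the point halfway between the screen plane and full depth, which is exactly the angled-in-space look described for the image field.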
To understand: The display of an image is limited by its image field. If you have not dragged a separate image field object onto the light curve, the image field is initially as large as the canvas window. The 3D object now changes the position of the image field. For example, you can use the value 45° for the Y-axis to ensure that the image field and therefore the image is angled in space.
Like all other m.objects dynamic objects, you can use the 3D object for both static corrections and dynamic effects. At least two objects are required for the latter, which represent the start and end points of an animation.
The values for the center of rotation can be found at the top right of the editing window. These primarily have an effect in dynamic applications. For example, if you want to animate an image so that it swings to the side like a door, set the X value here to 0 for both the start and end of the movement. Under Rotation angle, set the Y value to -110°, for example. During the animation, the image now swings back to the left. The imaginary door hinge is therefore on the left-hand side.
Accordingly, the center of a three-dimensional movement can also be shifted on the Y and Z axes. The following image shows a lettering that rotates spatially around an imaginary point, whereby this point has been moved backwards on the Z-axis. The letters are animated individually and are each on their own image track.
The crucial point here is that the X, Y and Z axes always refer to the image field, not the screen window.
For most applications of the 3D object, it is sufficient to make settings for the rotation angle and center of rotation. The other options that you will find in the editing window of this dynamic object are only used in special applications.
The values for the camera position allow you to correct the perspective. However, this requires that at least one of the rotation angles X or Y is not set to 0. If you increase the X value for camera position, you effectively move the position of the camera - i.e. the shooting position - to the right; by decreasing it, you move the camera to the left. Similarly, you can use Y to adjust the camera position up and down. Only slight corrections are recommended here, as the image itself was captured from a single perspective; strong corrections therefore quickly look unnatural.
If you insert an image field object into the light curve and move it to the right and left or up and down, the viewing angle of the image changes. The best way to understand this is to change the value for the Y-axis or X-axis under Rotation angle as described. However, if you now tick the option related to image field, the viewing angle remains unchanged when the image field is moved. Without the checkmark, you will generally achieve a more realistic effect, as the perspective of the image is adapted to the position of the image field in the overall scene.
The Distance parameter describes the distance of the object from the viewer. The default value for this is 100%. If you increase this value, the object, for example a picture, moves backwards, i.e. becomes smaller. At 200%, the distance therefore doubles. At values below 100%, the object moves forward and becomes larger. In contrast to the zoom object, which enlarges or reduces an object on the canvas plane until it is no longer visible, here it is moved back and forth in space and remains visible even at a very large distance.
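The relationship between the Distance value and the apparent size follows a simple inverse proportion. The sketch below is our own illustration of that idea, not m.objects' exact internal formula:

```python
def apparent_scale(distance_percent):
    """Apparent size of an object relative to its size at the default
    Distance of 100%. Simple inverse-proportion model, assumed here
    for illustration; not necessarily m.objects' exact calculation."""
    return 100.0 / distance_percent

print(apparent_scale(200))  # 0.5 -> twice the distance, half the size
print(apparent_scale(50))   # 2.0 -> half the distance, twice the size
```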
The image angle corresponds to the focal length of a virtual camera. At values below 100%, a wide-angle effect is created when viewing the screen, whereas a higher value is equivalent to a telephoto lens; 100% corresponds to a normal focal length. You can therefore use the image angle to exaggerate or attenuate the perspective effect of a three-dimensionally aligned object. A doubling of the value corresponds to a doubling of the distance from the center of rotation combined with a simultaneous doubling of the focal length. This also assumes that at least one of the rotation angles X or Y is not set to 0.
The following instructions are only relevant in very specific applications. The corresponding settings should be reserved for experienced users:
In the editing window of the 3D object, you will see information on the order of application before the X, Y and Z rotation angles. Normally, you should not deviate from the default settings, as shown in the following image.
With animations, there may be cases in which, for example, a rotation should not take place around an object-related Z-axis, but around a scene-related Z-axis. In this case, click on the buttons in front of X, Y or Z to change the sequence.
This dynamics tool is available from the m.objects live expansion level.
You can use the passepartout object to create passepartouts around images, texts and videos, whereby the size, transparency and color of the frame can be changed as required. This tool is particularly suitable for setting images against a background or adding a color-filled frame to texts so that they stand out clearly. You will see some examples of applications of the passepartout object below.
Like all other dynamic tools, the passepartout can be used both statically and for animations. For example, you can enlarge a frame around an image in a dynamic motion sequence.
The passepartout object is used according to the familiar m.objects principle: With the image tracks activated, select the pale red icon with the black F in the tool window and drag it onto the light curve of the image you want to edit while holding down the mouse button. As soon as you release the mouse button, such an icon appears on the light curve.
In our example, you can see an image that is reduced in size against a background. This image is to be framed so that it stands out better against the background. As soon as you have placed a passe-partout object on a light curve, m.objects creates a predefined frame, as a glance at the canvas in our example shows.
Once you have closed the editing window for the passepartout object after inserting it, open it again by double-clicking on the new icon in the light curve.
First, let's take a look at the effect settings on the left-hand side of the window. Use the top slider to change the opacity, i.e. how opaque or transparent the frame is. Values from 0% (transparent) to 100% (full opacity) are possible here. Use the arrow control (more on this in the chapter Working with the arrow controls) and drag up or down to increase or decrease the value. As usual, you can also enter a value directly in the field.
Below this, you will find the setting options for the width of the frame. Horizontal and vertical dimensions can be entered separately. For example, you can add a background color to a text across the entire width of the canvas, while the height of the background remains limited to the height of the text.
The values for the frame width can also be changed using the arrow controls or by entering numerical values.
Click on the color field below to open the color selector and define a color for the passepartout. Here you can click directly with the mouse in one of the fields and move the mouse until you have found the right color. The preview on the right-hand side shows the selected color. Alternatively, you can also enter numerical values here. To reset the color selector to white if required, simply click on the Set to pure white button at the top right. Then confirm with OK.
Alternatively, you can also select a color directly from the canvas using the pipette. To do this, tick the eyedropper option next to the color field. Now click on a spot in the canvas to pick up the desired color.
Below the color selection you will find three more options in the editing window:
This gives you the option of showing only the passe-partout without the actual picture. This allows you to display colored areas in the canvas for design purposes, for example.
The option Passepartout without special effects refers to the other dynamic objects Shadow / Glow and Blur. You have the option of applying shadow effects or blurring to the passepartout by using one or both of these dynamic objects. However, if you select the option Passepartout without special effects, the passepartout is excluded from the effect, i.e. displayed without shadows or blurring.
The Scale effect to display size option allows you to increase or decrease the width of the frame proportionally when the image is enlarged or reduced. If this option is not selected, the frame around the image retains its width.
As with other dynamic effects in m.objects, you can create an animation by using two or more passe-partout objects. So we drag a second passe-partout object from the tool window onto the rear area of the light curve.
The opacity as well as the horizontal and vertical width can be changed dynamically. Double-click on the icon again to open the editing window for the second passepartout and change the values accordingly:
In this example, the opacity has been reduced and the width of the frame has been increased in all directions. Click OK to apply these changes.
If you now run the locator over the light curve, the change in the frame appears as a dynamic animation. You will find the dynamic options on the right-hand side of the editing window. You can find out more about this in the Dynamics options section.
This dynamics tool is available from the m.objects live expansion level.
As the name suggests, the shadow / glow tool adds shadows and glow effects to objects on the image tracks. It can be used in particular for picture-in-picture constructions, texts and cropped objects of any shape to effectively set them off from the background. All decisive values can be set individually.
The shadow / glow object can be found in the tool window as usual when the image tracks are activated. You can recognize it by the grey square with a black S.
Hold down the mouse button and drag the object from the tool window onto a light curve. As soon as you release the mouse button, a corresponding icon appears on the light curve.
You can see that the text we want to use here as an example already has a shadow.
m.objects therefore specifies a default value. After inserting, the editing window will open automatically. If you have closed it, open it again by double-clicking on the icon.
You will find the effect settings on the left-hand side of the form. The first value relates to the opacity of the shadow, which you can change from 0% (completely transparent) to 100% (full opacity). This is followed by the option for blurring the shadow. The higher this value is, the more blurred it will be, whereas a value of 0 gives the shadow a completely sharp edge.
You can read more about using the arrow controls in the chapter Working with the arrow controls.
Opacity and blur also offer the option Extend area. If you select this option, the effect of the two values on the shadow is increased, making it larger. It is best to try out this option; you will see its effect immediately in the canvas.
Angle and distance determine the alignment of the shadow / glow and its distance from the object. Here, too, you can check all the changes you enter in the form directly in the canvas and thus adjust the effect accordingly.
By default, m.objects initially creates a black shadow. However, you can change this in several ways. In the input form, you will find the Color mode option with the Color default entry. In this setting, click on the Color button below and you will be taken to the familiar color picker.
Select a color here and confirm with OK. The button then takes on the selected color and the shadow / glow is colored accordingly.
You can also select a color directly from the canvas using the eyedropper. To do this, tick the eyedropper option next to the color field, then click on a spot on the canvas to pick up the desired color. To reset the color picker to white if required, simply click on the Set to pure white button in the top right-hand corner.
Now click on the drop-down menu under Color mode.
You will see two further options, Image and Image negative. With these options, the shadow / glow is filled with the image itself or with the negative of the image. As long as you use a monochrome font, as in our example, the shadow will be filled with the same color, in this case white. If you select Image negative instead, the shadow here will be black. The color button below is crossed out, so it cannot be used in this setting.
Under the color settings, you will see the Contour shadow option. This means that the shadow / glow effect is only displayed as an outline, i.e. it virtually traces the outline of the object.
However, the real highlight of this dynamic tool follows under the Appearance of the shadow setting. By default, m.objects gives you the selection Shadow outwards at this point. The effect therefore appears as a 'normal' shadow. However, the tool can do a lot more. You can see this if you expand the selection list here:
You can choose which of the 23 shadow effects suits you best. And ultimately there is only one thing to do: try it out.
Below the selection box, you will also find the option Scale effect to display size. This allows you to make the actual object - in our example, the text - larger or smaller and to increase or decrease the shadow effect in relation to it. If the option is not selected, the strength of the effect is not adjusted.
The following images show a selection of the available shadow effects.
1.1 Shadow outside
1.2 Shadow inside
1.3 Glow inside
1.4 Shadow in front of object
1.5 Shadow + shine
4.1 Shadow outside (Obj. transp.)
fx.3 shadow black + glow
The effect settings opacity, blur, angle, distance and even the color can be animated for the shadow / glow object, i.e. they can be changed in a flowing motion sequence. The settings for the dynamic effects can be found in the editing window on the right-hand side. A detailed description can be found in the chapter Dynamics options.
The blur tool is available from the m.objects live expansion level.
This dynamic tool offers exciting possibilities for creative play with sharpness and blurring. It uses the Gaussian blur, which ensures a particularly high display quality. You can use the blur object to dynamically change images, texts and videos from sharp to blurred and, of course, in the opposite direction.
You will find the blur object in the tool window as a blue square with a white B when the image tracks are activated.
Hold down the mouse button, drag the object onto a light curve and release the mouse button. A corresponding icon appears in the light curve.
In the following image you can see a scene from the Venice Carnival, which is made up of two photos: a background image with the building and another photo with two people who were previously cut out of another image. Below you can see the constellation on the image tracks.
An impressive animation can be created here with little effort. In a first step, we insert a blur object into the front area of the light curve of the background image and another one into the rear area.
Double-click on the second blur object in the light curve to open the corresponding editing window.
As you can see, there are not too many values to set here. Working with the blur object is really very simple. Essentially, you control the strength of the blur using the upper value. You can read more about using the arrow controls in the chapter Working with the arrow controls.
Below this, you will find two more options: Firstly, you can scale the effect to the display size so that the strength of the blur effect is also increased or decreased when the image is enlarged or reduced. There is also the Preserve edges (alpha) option. It offers the option of retaining a sharp image edge despite the blurring of the image or a sharp contour edge for cropped shapes.
In our example, we change the blur value to around 50, which clearly blurs the background image, and confirm with OK. This creates an initial animation that changes the building in the background from sharp to blurred.
The next step is very simple. We also insert two blur objects into the image with the cropped people, which lie exactly above the blur objects of the background image.
Here, the animation should run in the opposite direction. We therefore change the blur value for the first object to around 50, while it remains at 0 for the second object. Our animation is now complete: the two people appear in the foreground while the building in the background becomes increasingly blurred. You can easily change the speed of this animation by adjusting the distance between the blur objects.
The settings for the dynamic effects can be found at the bottom of the object's editing window. A detailed description can be found in the Dynamics options chapter.
The dynamic object image/video processing is available from the m.objects live expansion level. A detailed description can be found in the chapter Color grading with image/video processing.
The mirroring dynamic object is available from the m.objects live expansion level.
This object can be used to easily create reflections of images, videos and texts that follow the perspective exactly, even with movements such as 3D animations. To apply the dynamic object, drag the gray icon with the letter M and the label Mirroring from the tool window (with image tracks activated) onto the light curve of the desired image, text or video.
A corresponding icon will then appear on the light curve and the associated properties window will open. You can open this window at a later time by double-clicking on the icon in the light curve.
If you want to mirror an image or video, there must of course be enough space on the screen for the mirroring. You should therefore reduce the size of the image or video with an image field beforehand. The mirroring automatically adjusts to the set size. Texts that you create with the title editor already have an image field; here too, the size can of course be adjusted as required.
The opacity of the mirroring is already reduced to 50% by default in order to achieve a realistic mirroring effect. You can now use the arrow control or a numerical input next to it to change the opacity value as required.
If you check the Transparency gradient option below, you can let the mirrored representation of an object fade into transparency. As a result, the mirroring becomes weaker and weaker as the distance to the mirrored object increases, allowing you to achieve even more realistic effects. Use the value for the transparency gradient to set its extent: The higher this value is, the larger the visible area of the reflection, i.e. the smaller the effect of the gradient.
Below this you will find the value for the distance between the original and its reflection, which you can of course also adjust as required.
Like all dynamic objects in m.objects, the Reflection object can be used both statically and dynamically. So if you insert several mirroring objects into a light curve and set different values for each of them in the properties window, you will get a dynamic motion sequence.
Mirroring can be used particularly effectively in conjunction with other special effects. You will find additional options in the lower part of the properties window that you can use to define how these other effects work. By inserting a blur object, for example, you can ensure that a reflection on a water surface is blurred to a certain extent and therefore looks even more realistic, while the original image remains unaffected by the blur. To do this, select the Reflection only value under the Blur option.
Similarly, for the special effects shadow / glow and image/video processing, you will also find the values overall display, with which the effect affects both the original and the mirroring, as well as mirroring only and original only, with which the effect is limited to the mirroring or the original.
The settings for dynamic effects with the mirroring object can be found in the properties window on the right-hand side. A detailed description can be found in the Dynamics options chapter.
The speed/pitch dynamic object is available from the m.objects live expansion level.
In contrast to the other dynamic objects in m.objects, you can use the Speed/Pitch object both in the Projection component, i.e. in the image tracks, and in the Digital Audio component, i.e. in the sound tracks. By clicking in the respective component, the symbol of the speed object, the letter T (for timebase) against a yellow background, appears in the tool window.
Within the picture tracks, you use this dynamics object to influence the playback speed of video sequences. In the sound tracks, it is applied to sound samples accordingly.
The application is the same as for all other dynamic objects: Hold down the left mouse button and drag the T symbol from the tool window onto the light curve of a video or onto the envelope curve of a sound sample and release the mouse button. This opens the properties window in which you can make the desired changes. Later, open the properties window by double-clicking on the symbol in the light curve or sound envelope.
By default, a video clip that you have saved on an image track is played back at nominal speed. This is usually the speed at which the scene actually took place. You can now use the Speed object to slow down (slow motion) or speed up (fast motion) playback.
To do this, drag a T-object from the tool window onto the light curve of the video. Now enter the value 200% for the speed in the properties and confirm with OK. You have now created a time-lapse effect in which the video is played back at double speed. If you enter 50% instead, you create a slow-motion effect in which the video only runs at half speed.
After clicking OK, the Change duration window appears, which offers you the option of adjusting the video's duration on the timeline to the changed timing.
This is because a video that runs at twice the speed only needs half the length on the timeline. In the opposite case of slow motion, the time required is extended accordingly. If you confirm the window with Yes, m.objects shortens or extends the light curve of the video to the exact length actually required.
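The adjustment m.objects offers follows directly from this arithmetic; the following sketch (our own illustration) shows how the required timeline length relates to the speed value:

```python
def adjusted_duration(original_seconds, speed_percent):
    """Timeline length a clip needs after a speed change: at 200% it
    takes half its original time, at 50% it takes twice as long.
    Illustrative model of the adjustment described in the manual."""
    return original_seconds * 100.0 / speed_percent

print(adjusted_duration(20.0, 200))  # 10.0 -> fast motion, shorter curve
print(adjusted_duration(20.0, 50))   # 40.0 -> slow motion, longer curve
```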
If the Remember response for this session checkbox is selected, m.objects will automatically make the necessary adjustments when the timing is changed again as long as the current m.objects show is open. This applies both when changing T-values and when moving, duplicating or deleting corresponding objects or when pasting via the clipboard.
However, if you click No in the Change duration window, the light curve of the video initially remains unchanged. By manually lengthening or shortening the light curve, you can adjust the timing later if necessary. In this case, a vertical red line shows you the exact limit of the time required on the timeline.
Speed changes can also be displayed dynamically. One application for this is, for example, the start of a video scene with a still image that only starts to move at a later point in time.
To do this, drag a first T-object from the tool window onto the light curve of the video and place it at the point up to which the freeze phase should last. In the properties window, the value 100% is initially entered for the speed, which corresponds to the nominal speed of the video. To create a freeze frame, enter the value 0% and confirm with OK.
Place another T-object just behind it, i.e. to the right of it, and click on the R button in the properties window to reset the value to 100%. If you now run the locator over the video, you will see that the still image becomes a moving video. The duration of the animation from still image to moving image is determined by the distance between the two T-objects.
Of course, the animation can also be designed in the other direction, if a scene is slowed down from a running video to a still image to allow it to be viewed in detail. In this case, change the speed from 100% in the first T-object to 0% in the following T-object.
As with all m.objects dynamic objects, you can of course also use any number of T-objects to dynamically change the playback speed. For example, you can use additional objects to return to the normal speed after a dynamic slowdown in order to accentuate a particular section of the video.
Fast decelerations to a still image or accelerations from a still image are possible with almost any video clip without any problems and in an appealing quality.
Of course, you can also reduce the playback speed without slowing down to a still image. Whether such slow motion delivers an appealing quality depends on the characteristics of the video clip used, primarily on the frame rate of the recording.
For example, if the video was only created at 30 fps, reducing the playback speed to 50% will result in a frame rate of only 15 fps. This can lead to visibly jerky playback. Whether this is the case depends heavily on the content. In general, playback of less than 20 frames/s can be perceived as visually disturbing.
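The frame-rate arithmetic behind this is straightforward; a small sketch (our own illustration) makes the rule of thumb explicit:

```python
def effective_fps(recorded_fps, speed_percent):
    """Frame rate that actually reaches the screen when a clip is
    slowed down (before any frame blending by m.objects)."""
    return recorded_fps * speed_percent / 100.0

print(effective_fps(30, 50))   # 15.0 -> below 20 fps, may look jerky
print(effective_fps(120, 50))  # 60.0 -> ample headroom for slow motion
```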
However, the frame blending automatically performed by m.objects to improve smooth playback (see chapter Smoothing the playback of video clips with unsuitable frame rates) significantly alleviates this potential problem. The following two content-related considerations can also allow lower frame rates:
- Content/contours: If the moving parts of the image show soft-edged, low-contrast contours (e.g. clouds or banks of fog, but also objects with motion blur), it is often possible to go below 20 frames/s without the scene jerking noticeably.
- Content/movement: If sharply contoured objects within the scene only move slowly, slow motion at less than 20 fps can often be set up well.
To put it the other way round using a practical example: if a dark bird, rendered in perfect focus against a bright background, attacks another bird in a daring flight maneuver, and the camera did not pan along with the fast movement during the recording, a strong slow motion of this scene will presumably only work well if a correspondingly high frame rate was recorded (e.g. 120 fps or more). When recording scenes for which slow motion is planned later, you should therefore ensure that the frame rate is as high as possible - possibly at the expense of resolution. Many cameras allow considerably higher frame rates at a lower resolution.
Many cameras record at a correspondingly high number of frames/s (e.g. 240fps) in high-speed mode, but the video stored on the memory card is marked with a low frame rate (e.g. 30fps). In normal player software, a video of this type runs as 8x slow motion by default. You can also process such a video dynamically in m.objects: By accelerating it to 800%, it runs at nominal speed, while dynamically reducing the speed down to 100% smoothly takes it to a technically good slow motion.
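The relationship between the recorded frame rate, the nominal frame rate in the file and the m.objects speed value can be sketched as follows (our own illustration of the arithmetic described above):

```python
def perceived_slowdown(recorded_fps, container_fps, speed_percent=100):
    """Slow-motion factor the viewer sees when a high-speed recording
    is tagged with a lower nominal frame rate and played back at a
    given m.objects speed value."""
    return (recorded_fps / container_fps) * (100.0 / speed_percent)

print(perceived_slowdown(240, 30))       # 8.0 -> plays as 8x slow motion
print(perceived_slowdown(240, 30, 800))  # 1.0 -> 800% restores real time
```

Dynamically reducing the speed value from 800% down towards 100% therefore glides smoothly from real time into an 8x slow motion.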
With regard to hardware support when decoding videos (see chapter Hardware support for video decoding), please note that your graphics card may not be able to decode 240 frames/s. It may be useful to explicitly deactivate hardware decoding for such a video by unchecking Use graphics hardware for decoding if possible in its properties (double-click on the video's light curve). This in turn places a greater load on the CPU. If the CPU is also not capable of playing back the full frame rate, it is advisable to export the video already processed with the corresponding T-dynamic objects as a new video with e.g. 60 frames/s (see chapter Exporting video) and then integrate the new video into the image tracks.
If you extract sound from a video and save it on an audio track (see chapter Wizard: Separate video sound on audio track), the extracted sound takes over any speed objects in the video. You can also remove the speed objects from the sound sample. To do this, you must first ungroup the video sound from the image track using the Ungroup event group icon in the toolbar.
However, if the sound of a video has not been extracted into the audio tracks, it is played back together with the video from the picture track. Accordingly, changes to the playback speed of the video also affect the video sound and, in particular, the pitch.
With atmospheric sounds, changing the pitch in this way can be a desirable effect. By selecting the option Set pitch / Time stretching, the original sound contained in the video can be played back at the appropriate speed without changing the pitch. This is often important if the audible and visible events in a video have a recognizable connection and must therefore remain synchronized. This even works with dynamic speed control.
A detailed explanation of how this works can be found later in this chapter under Time stretching: adjusting playback speed and pitch independently of each other.
The playback speed of a sound sample can be influenced in a similar way to a video sequence. If you drag a single T-object onto a sound envelope, you change the speed of the entire sample with the speed value: 100% corresponds to the nominal value, i.e. the normal playback speed, a lower value results in slower playback, a higher value results in faster playback.
As with videos, m.objects also offers to adapt the object, in this case the length of the sound envelope, to the changed timing in the case of sound files.
The procedure is similar to that for videos; you can read more about this in the chapter Adjusting the timing.
Atmospheric original sounds, for example from nature or noises such as the sound of a bell, can be well suited for this. By changing the speed, you can achieve a different sound and therefore in many cases a completely new effect, for example to add a particularly impressive or dramatic tone to scenes. Instrumental music can also be suitable for this purpose, although a value of 0% does not work here, as in this case no sound is output.
Speech and vocals initially appear unnatural when the playback speed is changed due to the altered pitch.
This is why you will find the option Set pitch / Time stretching in the properties window of the Playback speed / Pitch dynamic object. This enables dynamic speed control of audio clips without changing the pitch. Conversely, it is also possible to use this function to change the pitch without affecting the playback speed.
To do this, activate the Set pitch checkbox. You can now increase or decrease the Speed value at the top within a relatively wide range, i.e. speed up or slow down the playback of the audio clip without voices, musical instruments or original sounds sounding unnatural. How wide this range is ultimately depends on the specific content. You can use this to adjust the playing time of sound files to image or video sequences, as well as to change the tempo within pieces of music; it can even be used dynamically. You can also increase the precision of this function with the Increase precision option. Please note, however, that this may increase the load on the computer's processor. This applies in particular to older devices or computers that are primarily suitable for office applications; for current multimedia computers, it does not play a significant role.
You can also vary the pitch without affecting the speed. To do this, enter a corresponding value in the Shift field, either numerically or using the arrow control. The scale for this is in cents, with 100 cents corresponding to a semitone interval (equal temperament). Depending on the application - especially for vocal parts such as speech or singing - it may be useful to also activate the Preserve timbre option to maintain a natural sound. This function is ideal for matching adjacent or overlapping pieces of music in different keys to create a harmonious transition. It is also very useful for changing the vocal pitch of speakers.
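The cent scale mentioned above follows directly from equal temperament: a shift of c cents multiplies every frequency by 2^(c/1200), so 100 cents is exactly one semitone and 1200 cents is an octave. A minimal sketch (the helper function is illustrative, not part of m.objects):

```python
def cents_to_ratio(cents: float) -> float:
    """Frequency ratio for a pitch shift given in cents (100 cents = one equal-tempered semitone)."""
    return 2 ** (cents / 1200)

print(round(cents_to_ratio(100), 4))          # one semitone up: 1.0595
print(round(440 * cents_to_ratio(-1200), 1))  # 440 Hz shifted one octave down: 220.0 Hz
```

This is why the Shift field works in cents: it maps directly onto musical intervals, so matching two pieces of music a semitone apart means entering 100 (or -100).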
With the audio mini-player on the right-hand side of the form, you can follow all the changes you make here in real time and check the effect immediately.
Masks are an important tool when working creatively with m.objects. They offer countless possibilities for implementing ideas and are easy to use.
The basic principle of an image mask is very simple: it hides parts of an image or - as an inverse mask - makes certain parts of an image visible. It can have practically any shape, from a rectangle to a cropped object to text.
You can also use the dynamic tools to animate masks in every conceivable way, for example by dynamically enlarging them, moving them across an image or rotating them.
Note: You can create image transitions in simple geometric shapes or in the form of wipe transitions with just a few mouse clicks using QuickBlending. You can read more about this in the QuickBlending chapter.
Here you can see two images on the timeline that are directly above each other in tracks B and C. The image in track B is set in overlapping mode so that it completely covers the image below it. In track A, there is an image above it that consists of a black circle against a white background and is to be used here as a mask.
Something like this can be created with little effort in any image editing or drawing program.
If the locator is positioned at this point, you will now see a white square with a black circle in front of the background on the canvas, as expected. You now need to adjust the circular image so that it becomes a mask. To do this, double-click on the light curve to open the Edit image window and check the Image mask box.
To the right of this you will find the entry 1 tracks, which causes the image mask to affect one underlying image track. If more image tracks are to be masked, increase the number accordingly. For Overlapping, transparency, select the option white so that the white background of the circle image becomes transparent, as only the circle is to be used as a mask. The value for Tolerance should be set to 50 percent, which is the default setting in m.objects. Confirm with OK.
You will now see that the white square has disappeared and the background image appears in the circle.
Only the black circle can now be seen in the light curve on the image track. You will also see a dark shadow behind the light curves in the two upper tracks. This shows you immediately that a mask is being used at this point.
Cropped shapes that you have created in an image editing program and saved in a suitable file format such as png, tif or psd are also very suitable as masks (transparencies can be saved in these formats). As with the circle mask in the example described above, insert the corresponding image into the image track above the image to be masked.
In the Edit image window (double-click in the light curve), select Overlapping, transparency: alpha channel. Then proceed as described above. The cropped shape now becomes a mask.
Certain subjects are also suitable for cropping directly in m.objects, especially if they have a uniform color tone. An example of this is a photo of a flower that is now in the upper track instead of the circle mask.
In the Edit image window, select the Define color tone option under Overlapping, transparency: and then tick the Pipette option.
You can now use the mouse pointer in the canvas (the pointer now has the shape of an eyedropper) to pick up the desired color, in this case the yellow of the flower. The blossom is then displayed transparently. Click on Invert transparency to remove the blossom from its background instead and then use it as an image mask (as described for the circle mask). You have now created a mask in the shape of the flower.
The mask described in the previous section in the form of the cropped flower can be used for a dynamic fade-in in just a few steps. To do this, insert a zoom object into the light curve of the mask at the beginning and at the end of its display time.
For the first zoom object, enter a zoom factor of 0% in the properties (double-click on the zoom object) so that the mask is not yet visible. For the second zoom object, drag the zoom factor upwards so that the image in the lower track completely fills the canvas. In this way, the lower image in the form of the blossom will now be faded in, getting bigger and bigger.
In the options for the image masks, you can reverse the effect by simply ticking the corresponding box.
This means that the image under the mask is no longer masked, but a section of it is displayed in the form of the mask.
In the following example, you can see a text in track A that was created with the m.objects title editor. In this case, there is a video sequence in the image track below. The video shows the surf on a rocky section of coastline.
The aim should now be to make only the text visible on the canvas, in the contours of which the movement of the waves is visible.
As m.objects treats texts created with the title editor in the same way as images, you can also use the texts as masks. The procedure is therefore very simple: Double-click on the light curve to open the Edit image window and check the Image mask option. Now also select the Invert effect option. In this way, the video below the text will only be visible in the contours of the text.
Of course, you can also apply the inverse effect of the image mask to a still image instead.
You can find out more about creating texts in m.objects in the chapter Creating texts with the title editor.
While an image mask has a direct effect on an image, the effect of an effect mask is directed at the m.objects special effects Blur, Passepartout, Shadow/Shine and Image/Video Processing as well as QuickBlending. Effect masks can be used to define the area and the intensity in which these special effects affect an image.
The following example shows the procedure for applying effect masks.
In contrast to image masks, effect masks are arranged below the tracks that they mask. Here you can see a landscape photo in image track A below an image in track B that contains a gradient from black in the middle to transparent at the top and bottom. This gradient will be used as a mask in a moment. Effect masks can also have any shape, from a rectangle to a gradient, as shown here, to text.
The landscape photo here has been given the special blur effect, which you can recognize by the blur object (the small square with the letter B) on the light curve.
The effect mask should now be used to mask this blurring in a specific area so that the landscape photo is displayed sharply there. To do this, double-click in the light curve of the gradient image, select the Effect mask option in the following window and confirm with OK.
In the canvas you can now see that the photo in the center is in focus, the gradient in track B now acts as an effect mask.
You can also recognize the effect mask by the red-brown shading on the tracks.
Using an image field object, you can now move the effect mask as required and thus freely position the focus area in the image.
By using two or more image field objects, you can also create an animation so that, for example, the focus area moves from the bottom to the top of the landscape photo.
In this way, you can mask any m.objects special effect so that it is only effective in a specific area of an image.
As with an image mask, you can specify in the options (double-click in the light curve) for an effect mask how many tracks it should affect. By clicking on the masked effects button, you will also find further settings.
If an image is provided with several special effects, for example with blur and shadow/shine, you can specify here whether the effect mask should affect all of them (default setting) or only selected special effects. Set or remove the checkmark in front of the respective option accordingly. For image/video processing, you even have the option of masking specific parameters here.
You can also reverse the effect for an effect mask by selecting the corresponding option.
Dubbing is an important step when working with m.objects, because it is just as important in an AV show as good visual material. There are different ways of creating a show. Whether you arrange the images and videos first and then insert the sound to go with them or, conversely, create sound samples first and then select the images to go with them is up to your personal preference or the theme of the show in question.
You can use different media as a source for the soundtrack: sound files that are already available digitally, CDs or DVDs from which you import music, sound samples that you have created with a digital recording device, for example, or spoken text with comments on the show.
If you have an m.objects workspace in front of you that does not yet have any audio tracks, these must first be set up. To do this, click on the gear icon in the toolbar and select the Digital Audio component in the tool window.
Use the mouse to drag the symbol into the free light gray area in the work window. In the following window, you can enter the desired or possible number of audio tracks, which varies depending on the configuration level of the software. Confirm with OK and then click on the cogwheel symbol again. You now have the audio tracks in front of you on the desktop.
The use of the new audio engine is the default setting in the program for new projects.
If required, the classic, external technology can be activated under Settings / Program settings with the option Use external audio service. This should only be done if conventional DirectX plug-ins are still to be used for sound processing.
The internal audio engine does the work behind the m.objects audio tracks. It has been completely reprogrammed for version 8.0 to make processing more flexible and independent of operating system and driver updates. m.objects is also able to process practically all modern audio file formats directly with the new audio engine, including Ogg Vorbis, FLAC, DSD (dsf) and many more. The sample rate (e.g. 44.1, 48 or 96 kHz), quantization (e.g. 16 or 24 bit) and channel coding are irrelevant. The conversion when using material of different sample rates within a project is carried out in superior quality.
Depending on the connected peripherals, a computer usually offers several alternative outputs for the sound. For example, you can play the sound via the internal speakers of a laptop or route it from a jack socket via cable to an amplifier connected to external speakers, you can transmit the sound via a USB transmitter or output it to a TV set via HDMI.
The desired output for the sound can be easily assigned in m.objects within the digital audio component. Simply click on the yellow arrow icon at the bottom right of the audio tracks.
A list of all available audio output devices now opens here. Click on the desired entry to select it and it will appear as a permanently visible entry to the left of the icon.
This assignment also works if an audio device was only connected after m.objects was started or after the show was loaded. You can even use the function during playback, for which it is briefly interrupted to initialize the selected output and then continued automatically.
In this context, m.objects also routes the sound of videos that have not been dubbed, as well as sound samples set to asynchronous, to the selected output device.
Of course, it is also possible to output multi-channel sound via the specific driver assignment. The text Various sound outputs then appears next to the icon under the sound tracks.
To do this, select View / Driver assignment in the menu at the top, which takes you to the corresponding view. You will now see the assigned output device on the audio tracks, while all available output devices are listed in the tool window. You can now specifically delete the assignment on individual audio tracks (right-click on the desired entry, select delete the selection and confirm with yes), then select another output in the tool window with a mouse click and drag it to the corresponding audio track with the left mouse button pressed. This creates the new assignment. To complete the action, click on the flashing wrench icon in the toolbar to return to the normal view of the program.
To insert a sound file into an m.objects presentation, click on the red dot at the bottom right of the component frame or right-click on a free space in the audio tracks. Two options will appear: Record / insert sound file and Search / insert sound file. If you want to insert a file that is already on your computer, select the second option Search / insert sound file. In the following selection window, first navigate to the corresponding file directory, select the desired file(s) and click on Open. The sound sample is now attached to the mouse pointer and can be placed anywhere on the audio tracks, provided there is enough space at the relevant position.
It is also possible to drag & drop individual or multiple audio files from Windows Explorer or the macOS Finder - either directly into the audio tracks or into the tool window of the audio component.
The tool window is also your audio pool, provided that the audio tracks are the currently active component (if necessary, simply click in an audio track). Here you can see all the audio clips from your m.objects production.
All audio clips that are actually used on the audio tracks are marked in yellow. If they are used more than once, the corresponding number is shown in brackets in front of the name. Audio samples shown in white are not currently stored on the audio tracks, but can be dragged from the audio pool into the tracks at any time.
You can also place video files directly on the audio tracks, which enables much more sophisticated sound post-processing compared to sound output from the image track. The full-length sound of the file is stored on the sound track and simultaneously transferred to the pool in the tool window.
If you want to integrate music from an audio CD, the procedure differs slightly between Windows and macOS.
On macOS, it is sufficient to grab the music track from the inserted (mounted) audio CD with the mouse and first drag it to the desktop, for example, which will already digitally read out the music. From there, you can then drag the new file directly into the m.objects audio tracks. You may be informed that this file is now outside the project directory. Please also read the chapter on file management.
Unlike earlier program versions, m.objects no longer offers an integrated CD ripper, so we recommend using one of the numerous free tools (Audiograbber, EAC or similar) instead.
In m.objects you also have the option of recording sound with a microphone directly in your show, for example comments on individual images or even on the entire presentation.
To do this, click on the red button in the bar below the audio tracks or right-click in an empty area in the audio tracks. Then select Record / insert sound file and the External recording tab in the following form.
Under Target directory, select the directory in which the recording will be saved. m.objects defaults to the Sound directory from your current project. It is recommended that you keep this structure, but you can change it if necessary. To do this, click on the button with the three dots and select the desired directory.
Enter a name for the recording under Destination file. By default, you will find the entry Recording; alternatively, you can enter your own name for the recording. m.objects adds the date and time to the file name when saving the recording.
If MP3 compression is ticked, the recordings are saved as mp3 files; otherwise they are saved as uncompressed wav files.
In the Recording field, first enter the device you are recording with under Signal source. m.objects automatically lists the available device(s) here. If several recording sources are available, select the desired device from the drop-down menu.
To the right, you can select an amplification for the audio signal of your recording if required. This is useful if the input signal from your microphone is too weak. You can recognize this by the input level in the form of the two vertical bars.
A signal that produces a deflection well into the yellow range is ideal. However, the level must not reach the red range, as in this case the recording is overdriven. Such an overload cannot be corrected afterwards.
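The reason such an overload cannot be repaired afterwards is that samples beyond full scale are flattened to the maximum value, so the original waveform is lost. A minimal sketch of the idea (function names and the normalized sample representation are illustrative, not m.objects code):

```python
def record_sample(value: float, full_scale: float = 1.0) -> float:
    """A sample beyond full scale is clipped to the maximum - the original peak is gone."""
    return max(-full_scale, min(full_scale, value))

def is_clipped(samples) -> bool:
    """Heuristic overload check: any sample sitting at full scale suggests clipping."""
    return any(abs(s) >= 1.0 for s in samples)

recorded = [record_sample(v) for v in (0.4, 0.9, 1.7, -1.2)]  # 1.7 and -1.2 are flattened
print(recorded)             # [0.4, 0.9, 1.0, -1.0]
print(is_clipped(recorded)) # True
```

No amount of later attenuation can restore the flattened peaks, which is why the level must be kept below the red range while recording.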
To the right of the level display, you will find a slider with which you can control the strength of the input signal, i.e. amplify or attenuate the signal as required.
If the input signal does not reach the yellow area, first check whether you can find a gain control on the microphone itself. In the Windows version of m.objects, you will also find the Input button, which takes you to the Windows sound settings, which, depending on the hardware used, offer you the option of amplifying the input level.
If no further amplification is possible on the external side and the slider in the m.objects form has already been moved all the way up, select a value from the drop-down menu under Amplification and then check the level deflection. If necessary, select a different value under Gain. Bear in mind that amplifying the useful signal also amplifies background noise. A source signal that is already well levelled is therefore always the first choice. In addition, voice recordings should be made using a microphone in an acoustically neutral environment that is well shielded from background noise. Particular attention should also be paid to any fan noise from the computer.
If you have entered a value for the gain, it is recommended that you select 32 bit for the quantization. You can use the values for sample rate and quantization to influence the recording quality. Depending on the hardware used, values of up to 96 kHz (sample rate) and 32-bit resolution (quantization) are available. For comparison: an audio CD uses 44.1 kHz and 16 bit, while 96 kHz and 32 bit correspond to studio quality.
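These two parameters translate directly into the raw data rate of an uncompressed recording: sample rate × quantization × number of channels. A quick sketch (hypothetical helper, not part of m.objects):

```python
def pcm_bitrate(sample_rate_hz: int, bits: int, channels: int = 2) -> int:
    """Raw data rate of uncompressed PCM audio in bits per second."""
    return sample_rate_hz * bits * channels

print(pcm_bitrate(44_100, 16))  # CD quality, stereo: 1411200 bit/s, about 1.4 Mbit/s
print(pcm_bitrate(96_000, 32))  # studio quality, stereo: 6144000 bit/s, about 6.1 Mbit/s
```

Studio-quality settings therefore produce roughly four times as much data as CD quality, which is worth keeping in mind for long voice recordings.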
In the Start field, you control the start and end of the recording.
Here you have the option of starting the recording automatically from a certain volume level, for example when you start speaking your commentary. If the auto start option is selected, you can use the slider below to specify the volume level at which the recording should start. You can easily make the appropriate adjustment by looking at the colored area next to the slider, because as soon as the set volume level is reached, the area turns purple.
With the Stop after option, it is also possible to automatically stop recording again after a certain period of silence; to do this, enter a value in tenths of a second. Stop after 10/10 s silence therefore means after one second of silence.
Then click on the Ready to record button. As soon as you start speaking, i.e. the specified volume is reached, m.objects will start recording. If the Stop after option is activated, the recording will be stopped after the corresponding duration of silence and saved temporarily. As soon as you continue speaking, the next recording will start. In this way, you can make several recordings in succession without leaving the form. After you have finally clicked OK, insert the audio clips into the audio tracks. These are initially attached to the mouse pointer; when you click on an audio track, they are placed there and m.objects calculates the audio envelopes.
You can end recording standby at any time by clicking on the button at the bottom, which is then labeled Cancel standby.
If you have not activated the automatic pause, simply stop the recording by clicking on Stop recording.
As an alternative to the automatic start, you can also start the audio recording manually. To do this, select the manual option and then click on Start recording or Stop recording. You can also create several recordings in succession here by clicking on Continue recording. Click OK to close the form and add the recording(s) to the audio tracks as described above.
The Play button gives you the option to listen to the recording you have just made. If it does not meet your expectations, you can use the Delete button to remove the recording from the buffer and create a new one.
All forms used to select or edit audio files have a mini-player with a navigable progress bar.
From file selection, the display and setting of audio file properties and sound effect settings through to setting the playback speed and/or pitch, all the relevant forms are equipped with audio mini-players. Each has a start/stop button and a progress bar that you can use to navigate within the current audio file by moving it. The playback status is saved when you exit such a form and is adopted accordingly by other audio mini-players. If such a form is opened while playback in a mini-player was previously active, the mini-player in this form is also activated immediately.
When recording via an external sound source, you should ensure that the signal level is sufficiently high, but without overloading. The colored bars of the level indicator will reach just below the red area. If the red area is touched, this is not yet critical, but if a red bar remains at the upper end of the level display, the signal has been overloaded and the recording is normally unusable.
When recording digitally from CD, you cannot influence the level; an exact image of the data contained on the CD is created. As a rule, this should be optimally levelled when viewed in the context of the entire CD.
Occasionally, a recording may only show slight level fluctuations. You can recognize this by the fact that the thin upper line of the dynamic curve does not come close to the upper boundary line of the audio track, even at the loudest passages of the sample.
If you have increased the volume manually or used sound effects (see below), the exact opposite can also occur, i.e. clipping can occur. When using sound effects, this is not readily apparent from the envelope curve.
In all these cases, you can select the Find peak level (selection) command from the context menu of the narrow bar below the sample. If the determined peak level field contains a value other than 0 dB, you should check the Automatically normalize box to bring the level to the optimum.
If your arrangement appears balanced overall, i.e. you have set the volume ratios of all samples to your liking, the mix of all sound passages used can of course still deviate from the optimum level. For example, overlapping samples could lead to clipping. To avoid this, after you have finished working on the sound and before the presentation or the preparation of a presentation medium, you should select the Find peak level (all samples) command in the context menu of the audio tracks (right-click on a spot where you do not hit a handle or bar). Here, too, you should normalize in case of deviations from 0 dB. This function can be called up as often as you like, as it has no effect apart from a possibly necessary uniform correction of the levels of all samples.
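Normalization as described above simply computes the gain that lifts the measured peak to 0 dB; because the decibel scale is logarithmic, the linear factor is 10^(-peak/20). A sketch of the arithmetic (hypothetical helper, peak given in dB below full scale):

```python
def normalize_gain(peak_db: float) -> float:
    """Linear gain factor that raises a measured peak level (dB below full scale) to 0 dB."""
    return 10 ** (-peak_db / 20)

print(round(normalize_gain(-6.0), 3))  # a peak at -6 dB needs roughly 2x gain: 1.995
print(normalize_gain(0.0))             # already at 0 dB, nothing to do: 1.0
```

Because the same factor is applied uniformly to all samples, the volume ratios you have balanced between the samples remain untouched.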
You will often not use the imported sound samples as they are, but cut them first. This is also very easy to do in m.objects. You can cut the beginning or end of a sample directly on the audio track. To do this, simply move the first or last two handles of the envelope to the right or left. The sound sample is then shortened by the shifted piece.
If you want to cut out a section within the sound piece, first position the locator at the beginning of this section and press Ctrl + K, or right-click in the sound envelope and select the option Cut medium at locator from the context menu. Repeat this process at the end of the section you want to cut out. m.objects inserts a cut at exactly these points.
Instead of cutting the audio at the locator position, you can also move the mouse pointer to the desired cut position, right-click and select the Cut audio command from the context menu.
The vertical lines and the new handles show that the sound file is now divided into several parts. Click on the narrow bar below a partial sample to select it; the bar is highlighted in color. You can now delete this partial sample. Once you have removed the section, you can still adjust the fade by selecting all the handles at the cut point and then moving them to the correct position with the mouse. If too much or too little has been cut out, this can be corrected by pulling the partial samples apart and subsequently moving the fade (i.e. the last two handles of the left or the first two handles of the right partial sample).
By duplicating partial samples - e.g. a chorus cut out at a suitable point - you can extend a piece of music. To do this, drag the partial sample with the mouse and press the ctrl key before releasing it. Incidentally, this technique is generally suitable for duplicating all objects.
To easily move the content of an audio clip that has already been placed on the timeline, you can, just as with video clips, hold down the Ctrl key and drag the dynamic display while holding down the right mouse button. This change applies to all selected clips at the same time, including any selected video clips (this is automatic for dubbed/grouped material), so that the synchronization of image and sound is maintained.
After you have inserted the sound samples into your production and edited them, the sound tracks may contain different file formats such as uncompressed WAV files or various video formats. For reasons of better performance and lower memory requirements, it is advisable to convert all files on the audio tracks to MP3 format. For example, if you want to transfer a complete project to an external data carrier, it will take up significantly less space there after the sound samples have been converted. On the other hand, MP3 provides excellent sound quality so that the conversion does not result in any audible loss of quality.
The procedure is very simple: Select File / Compress audio files from the menu. All sound samples that can be compressed are displayed in the following window. In the upper part of the window, the program lists all the files that are on the audio tracks. Here you can manually select which files are to be compressed to mp3. The files can be selected individually, but you can also use the buttons below the list to select all audio files that are not in mp3 format.
The Synchronization problems option is only relevant if m.objects has been deliberately switched to the old audio engine in the program settings, i.e. if the Use external audio service option is selected there. In this case, audio files with variable bit rates can cause problems with the synchronicity between image and sound during playback. By recompressing, m.objects creates a constant bit rate, which avoids these problems.
In the lower part of the window, the program lists all files from the audio pool that are not used on the timeline. You can also select these for recompression.
Enter the quality level for the compression below. 192 kbps is the default setting and provides an excellent result. A higher setting would result in considerably larger MP3 files, while a much lower one would result in audible losses.
Confirm the dialog with OK. m.objects then compresses the files and automatically inserts the MP3 files in the show and in the toolbox. The original files are retained, as the software cannot know whether they will be needed elsewhere.
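The space savings are easy to estimate: an MP3's size is essentially bitrate times duration, while uncompressed CD-quality stereo runs at about 1411 kbps. A sketch (hypothetical helper, sizes in decimal megabytes):

```python
def audio_size_mb(duration_s: float, bitrate_kbps: float) -> float:
    """Approximate audio file size in megabytes for a given duration and bitrate."""
    return duration_s * bitrate_kbps * 1000 / 8 / 1_000_000

five_minutes = 300
print(round(audio_size_mb(five_minutes, 1411.2), 1))  # CD-quality WAV: 52.9 MB
print(round(audio_size_mb(five_minutes, 192), 1))     # 192 kbps MP3: 7.2 MB
```

A five-minute track thus shrinks to roughly a seventh of its uncompressed size, which is what makes the converted project so much easier to transfer to an external data carrier.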
You can activate auto ducking for one or more audio tracks. This ensures that the volume of audio clips on neighboring audio tracks is automatically lowered in the area where there are audio clips on this audio track. A typical application is the insertion of spoken text on the auto ducking audio track with simultaneous volume reduction on the other audio tracks. This ensures that the spoken text is easy to understand while atmospheric sound can still be heard in the background.
To activate auto ducking, double-click on the desired audio track.
In the following window, first select whether the volume should be lowered for all other audio tracks or only for a certain number of audio tracks below. In the latter case, enter the desired number in the field next to it.
You can deactivate auto ducking again later if required with the switched off option.
With the Fast forward value, you specify how long before an audio clip on the auto ducking track auto ducking becomes active. If this value is set to 2 seconds, for example, the volume reduction on the other audio track(s) begins 2 seconds before the audio clip. The Fade-out value in turn determines how long the volume reduction takes. The value for fade-out can, for example, also be longer than the fast-forward, which means that a spoken commentary can already be heard while the background sound is still getting quieter.
Accordingly, the Tracking value determines how long auto ducking remains active after the audio clip, i.e. the time by which the volume is raised back to its original value on the other audio track(s). The Fade-in value then controls how long this increase takes. Here too, tracking and fade-in can of course have different values.
The Attenuation value determines how much the volume is reduced on the other audio track(s).
If required, you can reset the values to the default settings specified by the program by clicking on the Default button.
Once you have entered the desired values, confirm the form with OK. The audio track activated for auto ducking is now colored dark red. If you then drag an audio clip onto this track, it is given a dark grey background that extends downwards over the number of audio tracks entered.
The width of the dark grey background clearly shows the leading and trailing of the auto ducking (see above). The volume of the audio clips on the other audio track(s) is lowered accordingly in this area. If you move the audio clip on the auto ducking track, the volume reductions are automatically moved with it.
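The interplay of fast-forward, fade-out, tracking, fade-in and attenuation can be pictured as a gain envelope over time. The following sketch (a hypothetical function; the parameter names merely mirror the dialog described above and the defaults are arbitrary) returns the reduction in dB applied to the neighboring tracks at a given time:

```python
def ducking_gain(t: float, clip_start: float, clip_end: float,
                 fast_forward: float = 2.0, fade_out: float = 1.0,
                 tracking: float = 2.0, fade_in: float = 1.0,
                 attenuation_db: float = -12.0) -> float:
    """Attenuation (dB) on neighboring tracks at time t for one ducking clip."""
    duck_start = clip_start - fast_forward   # ducking begins this long before the clip
    duck_end = clip_end + tracking           # volume is fully restored by this point
    if t <= duck_start or t >= duck_end:
        return 0.0                           # outside the ducking region: no reduction
    if t < duck_start + fade_out:            # ramping the volume down
        return attenuation_db * (t - duck_start) / fade_out
    if t > duck_end - fade_in:               # ramping the volume back up
        return attenuation_db * (duck_end - t) / fade_in
    return attenuation_db                    # fully ducked while the clip plays
```

For a clip from second 5 to 15 with the defaults above, the reduction starts at second 3, reaches the full -12 dB at second 4 and is fully released again by second 17, matching the dark grey leading and trailing area on the timeline.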
Alternatively, you can also activate the settings for auto ducking for an audio clip alone via its properties form so that only this clip - regardless of the settings of the audio track on which it was saved - triggers a volume reduction of audio files on neighboring tracks. To do this, double-click in the sound envelope of the audio clip and then click on the Auto ducking button in the following form.
This opens the form for auto ducking described above, in which you enter the desired values as explained in the previous section and then confirm with OK. The individual settings of the audio clip override the settings of the audio track on which the object is located.
Available from m.objects live
When activated, auto ducking ensures that the volume of the sound is automatically lowered at waiting marks. In this way, auto ducking saves you some time-consuming work steps to adjust the sound at the wait markers. Waiting marks in m.objects are part of Speaker Support. A detailed description of the use of Speaker Support, wait marks and asynchronous sound can be found in the Speaker Support chapter.
If you insert a wait mark into your show using the Insert wait marks and adjust timing wizard, check the Use auto ducking (volume reduction) option in the form. You can read more about this wizard in the chapter Wizard: Insert wait marks and adjust timing. Click on Ducking settings to open the auto ducking properties window, in which you can enter the desired values as described above. For existing or manually inserted wait marks, double-click on the wait mark symbol to access the settings. Click OK to insert the wait marks or apply the corresponding settings. On the timeline, you will now also see a dark gray background on the audio tracks that you have set up for auto ducking. The width of this marked area again corresponds to the fast-forward and tracking values set up (see above).
If you now drag an audio clip onto a ducking track, the sound is automatically reduced to zero in the area of the wait mark(s). This complete reduction is necessary, as the sound would otherwise stop abruptly at the wait mark.
However, you can also select the asynchronous option in the sound properties (double-click in the sound envelope). This causes the sound to continue at the wait mark, for example as background music during a passage of a lecture with live commentary. In this case, auto ducking only attenuates the sound to the level previously entered in the form.
In the sound properties, you will also find the option only fade in at wait marks with auto-ducking (otherwise mute). If you select this option, the respective audio clip is only faded up in the area of the wait marks according to the auto ducking settings. It cannot be heard at any other passage in the show.
Changing the playback speed of the sound - slow motion / fast motion
You can use the Speed dynamic object to change the playback of the sound in a variety of ways. For example, you can increase the effect of atmospheric original sounds by slowing them down in order to give scenes a particularly impressive or dramatic tone. It is also possible to vary the playback speed of the sound without changing the pitch. You can also adjust the pitch without affecting the speed.
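The coupling that these options break apart can be sketched numerically: plain resampling by a factor changes duration and pitch together, which is why independent speed and pitch control requires time-stretching. A minimal sketch (names are illustrative, not m.objects terms):

```python
import math

def resample_effects(speed_factor, duration_s):
    """Effect of naive resampling by speed_factor: the clip gets
    shorter by the same factor, and the pitch rises with it
    (12 * log2(factor) semitones).  Illustrative model only."""
    new_duration = duration_s / speed_factor
    pitch_shift_semitones = 12 * math.log2(speed_factor)
    return new_duration, pitch_shift_semitones
```

Doubling the speed, for instance, halves the playing time and raises the pitch by a full octave, which is exactly the side effect that time-stretching avoids.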
A detailed description of the speed object and its use on sound samples can be found in the chapter Speed/Pitch - dynamic slow motion / time lapse and time stretching.
The global dynamics settings are available in all m.objects expansion levels. Sound effects are available in the live, creative, ultimate and pro licenses.
To apply effects to a sound sample, first double-click on its volume envelope. In the following window, you will find a button labeled Sound effects.
Note: If you see the label "Sound effects (DirectX PlugIns)" here, the external audio service is still activated, which has been replaced by a completely new audio engine in m.objects version 8.
It is not advisable to use the external audio service, as this technology is no longer up-to-date and, above all, is significantly less compatible with different audio formats than the new audio engine. Furthermore, it can be assumed that the external audio service will no longer be supported by future program versions. To use the new audio engine, select "Program settings" in the "Settings" program menu and uncheck the "Use external audio service" option in the following window. Then continue editing as described.
Click on the Sound effects button to open the corresponding settings window. By activating the Playback option at the bottom of the window, you can check the effect of your changes as you make them. The sound runs in a loop as long as this option remains activated.
In the top line, you will find the options for alternative stereo settings. To use them, check the box next to Change stereo mix.
Here you have the option of swapping the left and right sound outputs (L/R swap) or outputting both channels together as mono sound (L+R to mono). You can also output the signal from the left or right channel alone to both outputs (L to mono or R to mono).
Below this, you will find the equalizer, which you can apply using the Activate equalizer option. This gives you the option of adjusting the highs, mids and lows of a sound sample separately.
Here you can see three handles that you can move up and down by dragging them with the mouse to adjust the sound accordingly. The further you drag a handle upwards, the more the respective range is emphasized. The handles can also be moved to the right and left, so that you can make differentiated adjustments within the highs, mids and lows.
By left-clicking in the equalizer diagram, you can add any number of additional handles for even finer adjustments. You can remove individual handles again by clicking on them with the right mouse button.
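Conceptually, the handles define a gain curve over frequency that is interpolated between them. The sketch below models this as piecewise-linear interpolation in log frequency; it is a conceptual model of the handle curve, not the filter m.objects actually applies:

```python
import math

def eq_gain(freq_hz, handles):
    """Gain (dB) of an EQ curve defined by handles, each a
    (frequency_hz, gain_db) pair, interpolated linearly in log
    frequency.  Illustrative model only."""
    pts = sorted(handles)
    if freq_hz <= pts[0][0]:
        return pts[0][1]          # flat below the lowest handle
    if freq_hz >= pts[-1][0]:
        return pts[-1][1]         # flat above the highest handle
    for (f0, g0), (f1, g1) in zip(pts, pts[1:]):
        if f0 <= freq_hz <= f1:
            # position between the two handles on a log-frequency axis
            x = (math.log10(freq_hz) - math.log10(f0)) / (
                math.log10(f1) - math.log10(f0))
            return g0 + x * (g1 - g0)
```

Adding more handles simply adds more interpolation points, which is why extra handles allow finer control over narrow frequency ranges.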
Once you have made the settings in the equalizer, you can save them by clicking on the Save EQ settings button and assigning a name. This allows you to save different equalizer settings and simply click on Load EQ settings to reapply them to other sound samples later.
For the reverb effect, first check the Enable reverb effect box. Two options are available here: the reverb duration and the reverb mix. You can change both values using the corresponding slider or by entering a number.
The reverb duration determines how long the reverb effect is audible. Use the reverb mix to set the intensity of the reverb effect. The higher this value is, the stronger the reverb effect is mixed into the sound sample.
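The two parameters can be illustrated with a toy model: the reverb duration behaves like the classic RT60 decay time of a feedback delay, and the reverb mix like a dry/wet blend. This is a conceptual sketch only, not m.objects' reverb engine, and all names are assumptions:

```python
def reverb_feedback(delay_s, rt60_s):
    """Feedback gain of a single echo delay so that the echo has
    decayed by 60 dB after rt60_s seconds (the classic RT60
    relation).  Shorter reverb durations mean faster decay."""
    return 10 ** (-3.0 * delay_s / rt60_s)

def mix(dry, wet, reverb_mix):
    """'Reverb mix' modeled as a simple dry/wet blend,
    with reverb_mix between 0 (dry only) and 1 (wet only)."""
    return (1.0 - reverb_mix) * dry + reverb_mix * wet
```

A higher reverb mix weights the processed signal more strongly, which matches the description above: the higher the value, the stronger the reverb is mixed into the sample.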
The dynamics processor plays a special role here. This makes it possible to compensate for deviations in loudness (i.e. the perceived volume). This is possible both within an individual sound sample and across the entire presentation. You will therefore find both global and individual dynamic settings in m.objects.
To make the global settings, first close the form for the sound effects and then the properties window for the sound sample - in each case with OK or Cancel, depending on whether you want to save changes or not.
With active audio tracks (if necessary, left-click in an empty area of the audio component), you will find the Global dynamics settings tool in the tool window.
Double-click on this tool to open the corresponding form. To use the settings here, first check the Enable dynamics processor box. The dynamics processor reduces major deviations between quiet and loud passages in the playback of the sound without leveling out short-term developments in the dynamics. The character of music with timpani beats, for example, is retained, yet even with strong fluctuations in loudness it remains clearly audible over the entire duration without becoming too loud. Texts spoken at different volumes or pieces of music from different sources that differ greatly in dynamics are also effectively harmonized.
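The principle of long-term leveling with a capped boost, as described above, can be sketched per loudness block. The function below is an illustrative model with assumed parameter names, not the actual m.objects algorithm:

```python
def leveling_gain_db(block_loudness_db, target_db, max_boost_db):
    """Gain applied to one long-term loudness block: quiet passages
    are raised toward the target level, but never by more than
    max_boost_db; loud passages receive a (negative) gain and are
    attenuated.  Illustrative model only."""
    gain = target_db - block_loudness_db
    return min(gain, max_boost_db)   # cap the boost of quiet passages
```

Because the gain is computed over long-term loudness blocks rather than individual peaks, short events such as timpani beats keep their character while quiet passages are lifted.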
The picture shows the same piece of music on two audio tracks: the unprocessed original in the upper track and the sample processed with the dynamics processor in the lower track. It is clear to see that the quiet passage at the beginning in particular has been significantly boosted, while the loud passages have been slightly attenuated.
Use the slider or enter a numerical value to determine the maximum level of boost for quiet passages. Click OK to confirm your entry. The settings you have made will now affect all sound samples used in your m.objects production in which the "global setting" option is selected for the dynamics (this is the default setting).
Now return to the individual sound effects: Double-click in the sound envelope of a sample and then click on Sound effects.
The Global setting option is initially preselected under Dynamics processor. This means that the settings from the global tool described above are adopted. If you select the Individual setting option instead, you can enter a different value for the relevant sound sample that overrides the global settings. Here too, you can enter the value using the slider or numerically. Use the Deactivate option to switch off the dynamics processor specifically for the selected sample.
You will also find the Activate compressor option in both the global and individual settings for the dynamics processor.
You should only select this option if you are giving a presentation in a relatively loud environment. The compressor acts on much shorter time scales, so that an even more constant and powerful loudness is achieved overall. However, this audibly comes at the expense of the character of sophisticated music. The advantage is that the sound remains consistently audible in a louder environment.
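The difference to the long-term leveling above can be sketched with a static compressor curve: everything above a threshold is reduced according to a ratio, which flattens even short-term dynamics. Again an illustrative model with assumed parameter names, not m.objects' implementation:

```python
def compressor_gain_db(level_db, threshold_db, ratio):
    """Static compressor curve: below the threshold the signal passes
    unchanged; above it, ratio dB of input yield only 1 dB of output,
    so the gain reduction grows with the overshoot."""
    if level_db <= threshold_db:
        return 0.0
    over = level_db - threshold_db
    return -(over - over / ratio)    # negative gain = reduction
```

With a 4:1 ratio, a passage 12 dB above the threshold is pulled down by 9 dB, which explains both the more constant loudness and the audible loss of dynamic character.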
You can save all the settings you have made in the form for the sound effects.
To do this, simply click on the Save settings button and assign a suitable name. You can then use the Load settings button to apply these or other saved settings to other sound samples later.
Videos can be processed in m.objects just as easily as images and sound samples. In the following chapter, you will see that the software can do much more than just play back videos in the usual high quality. Video editing is just as possible as post-processing the sound and applying dynamic objects and mask effects.
Almost all digital cameras today offer the option to record videos, many of which even meet professional standards in terms of quality. Mobile devices such as smartphones also deliver videos in sometimes remarkable quality. And just as there are different file formats for images and sound samples, videos also come in different formats and resolutions.
In order for a computer to be able to play back different video formats, certain requirements must be met. First of all, this means that a corresponding decoder must be installed on the computer for each type of video. The decoders for the most common video containers and video formats for playback directly from the timeline are already included in m.objects. Only for Windows Media Video and a few very rarely encountered formats does m.objects access the decoders installed globally under Windows.
Up to and including m.objects v7.1, it was assumed that the majority of the computing power for video playback must be provided by the system's main processor, while the graphics card is mainly used for animations and real-time effects. However, modern video formats (H.264, HEVC) and high resolutions (UHD, 4K and more) cause a considerable processor load, especially when playing several video clips at the same time, for example with a cross-fade. On the other hand, graphics processors, now the most complex component of modern computers, are sometimes significantly more powerful than the processor, especially for operations such as video decoding. To exploit this advantage, m.objects has been able to completely outsource the decoding of modern formats (WMV3, VC1, H.264, H.265/HEVC, VP9) to the graphics card since version 7.5. The result is smooth playback of even extremely high-resolution videos with high frame rates and modern encoding.
Please note the following: for processing 4K videos, graphics hardware with at least 2 GB, preferably 4 GB of video memory is recommended. However, older computers in particular and those with less powerful graphics hardware or less video memory may deliver a better result if hardware decoding is not switched on.
When the program is started for the first time, m.objects generally recognizes the appropriate default setting itself and activates hardware decoding automatically if the graphics hardware is suitable, or leaves it switched off if it is not. The settings for this can be found under Settings / Canvas settings in the Realtime renderer tab, under the Hardware decoding for video option.
If m.objects cannot determine the suitability of the graphics hardware used for video decoding and has set the value to always off, you can enter the value autom. (recommended) instead. Test your computer thoroughly for its suitability for hardware-supported video decoding. If the display is jerky or there are picture errors, try the Standard or Multiscreen values instead. If this does not improve playback, the graphics hardware is not powerful enough and you should deactivate acceleration with always off.
In m.objects, you can even control the use of hardware-supported decoding individually for each video via its properties form (double-click in the light curve).
This means that an optimum distribution of the computing load can often be achieved even with medium performance from both the CPU and graphics chip.
Video sequences can be inserted into an m.objects production in the same way as photos. The initial import of video files takes place in two stages: m.objects first signals the status of the analysis of the new video. Only after this has been completed is the content placed with a mouse click. This prevents video clips from being placed while they are still being analyzed, before exact information about their actual playing time is available.
You can also use the lightbox here, which you first open using the corresponding button in the toolbar. Double-click in one of the empty fields to open the selection window where you can select one or more videos and insert them into the lightbox by clicking OK. This allows you to pre-sort videos and, if necessary, photos in the lightbox, select them and drag them onto the image tracks by holding down the left mouse button. As with images, the video is now attached to the mouse pointer. Release the mouse button at the appropriate point in the image track and the video is placed. Similar to the envelope curve of a sound file, a light curve of the length of the clip is created with individual images from the video sequence. The larger you set up the display of the image tracks (e.g. using the plus button), the more images are displayed.
A detailed description of the lightbox can be found in the chapter Inserting images via the lightbox.
It is even quicker to click on the red dot below the image tracks. Select Insert video clip and this will take you back to the selection window where you can select the desired video and confirm with OK.
The video is then attached to the mouse and can be inserted into the image track.
Alternatively, you can simply open Windows File Explorer, select the video in the relevant directory, drag & drop it onto the image tracks and drop it in the appropriate place.
As an alternative to displaying preview images across the entire width of the video light curve, only the first and last frame of the visible section can be displayed. To do this, select the option Show only first and last frame in video clips under Settings / Program settings.
For all three procedures, it is important to ensure that there is enough space on the image track for the video to be inserted. If you see a crossed-out circle symbol next to the mouse pointer when placing the video sequence, this means that there is not enough space. In this case, either place the video elsewhere or move the subsequent images to create the required space.
If you drag several video files onto the image tracks at the same time, the *Standard tool is used to determine the fade and hold times, as is the case with images. As the default value is usually not useful for videos, it is advisable to drag video files onto the image tracks individually.
When playing back video clips with frame rates (i.e. frames/s) that cannot be evenly distributed over the refresh rate of the output device (e.g. 24, 25 or 50 frames/s on a 60 Hz monitor or projector), there is usually a uniform but sometimes very annoying jerking during playback. This effect is called pull-down and cannot be avoided with conventional techniques. m.objects has a special technique for smoothing the playback of such problematic videos, which largely suppresses this disturbing effect in most cases (up to certain movement speeds). The technique is only applied automatically if the frame rates of the video and the output device do not match, but it can also be suppressed specifically for each video clip. To do this, double-click on the light curve of the video to open the properties window and uncheck the Automatically adjust frame rate option.
The technology also works very effectively when the content of a presentation that was deliberately filmed at 60 frames/s is later to be exported as a video at 50 frames/s for European TV sets.
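The arithmetic behind the jerking is easy to sketch: if frames are simply repeated to fill the display refreshes, some frames are shown longer than others. The function below (illustrative only) maps each display refresh to the source frame it would show:

```python
def pulldown_pattern(video_fps, display_hz, refreshes):
    """Which source frame each display refresh shows when frames are
    simply repeated without interpolation.  Uneven repeat counts in
    the result are the pull-down jerking described above."""
    return [int(i * video_fps / display_hz) for i in range(refreshes)]
```

For 24 frames/s on a 60 Hz display, the pattern repeats each frame alternately three and two times (the classic 3:2 cadence), so motion advances unevenly; at 30 frames/s on 60 Hz, every frame is shown exactly twice and playback stays smooth.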
Without further action, m.objects outputs the sound of a video - if available - directly coupled with the image via the image tracks. If necessary, the sound is faded up and down together with the image.
However, if you want to export a video file from m.objects as a final product or edit the sound of the video separately, you must also store the video's sound on one of the audio tracks. You do not need to carry out this step yourself; one of the m.objects wizards will do it for you: double-click on the light curve of the video to open the Video clip properties window.
Here, click on the Transfer soundtrack to "Digital audio" button at the bottom, whereupon the wizard opens. Alternatively, you can also find the wizard under Edit / Wizards / Separate video sound on audio track.
Confirm with OK, and the video sound is stored in an audio track while the video on the picture track is muted at the same time, so that the sound is not output twice. A detailed description can be found in the chapter Wizard: Separate video sound on audio track.
This gives you all the options for editing the video sound that you have with other sound samples.
If a video has several audio streams (soundtracks), for example for different languages or for sound from different sources, specify which audio stream should be played in the video properties form under Soundtrack.
The selection here is also taken into account when using the audio dubbing wizard (see above), i.e. the preselected audio stream is extracted into the audio tracks.
Soundtrack 0 is set as the default here. If there are other audio streams, these are displayed in the list as Soundtrack 1, Soundtrack 2, etc. By selecting mute, you deactivate the video sound in the picture track. Any previously extracted sound in an audio track will of course still be played back.
Stabilize or reverse video files
A detailed description of how to stabilize videos and reverse the playback direction can be found in the chapter Wizard: Stabilize or reverse video files.
Frame-accurate video editing in m.objects
m.objects offers very convenient options for cutting videos exactly to the individual frame - i.e. to any selectable individual image from the video sequence.
If the locator is positioned on a video sequence, you can use the right and left arrow keys on the keyboard to navigate frame by frame backwards and forwards through the video and follow the corresponding position in the video on the screen. In this way, you can determine the exact position for the cut. With the key combination Ctrl + K, m.objects then carries out the cut at the selected position. Alternatively, right-click on the video in the image track and select the option Cut medium at locator in the context menu.
If the video is dubbed, i.e. the video sound is on a separate sound track, and the video and sound are grouped together, m.objects performs the cut simultaneously in both the video on the picture track and in the sound.
Preview enlargement is another option in m.objects for cutting a video precisely.
If you drag the mouse over the light curve of the video while holding down the Shift key, the corresponding preview image is displayed enlarged at the position of the mouse pointer. This function is also available if the display is set so that only the first and last frame appear in the light curve. It is advisable to enlarge the work surface as much as possible using the magnifying glass button in the toolbar, as the desired position for the cut can be found quickly and precisely in conjunction with the timeline display, which is accurate to 1/1000s.
Once you have selected the location, right-click on the light curve of the video and select the Cut video command in the context menu.
If you want to cut out a specific area within a video sequence, make a cut at the beginning and end of this area. Then select the area and delete it with the [Del] key or via the context menu with the Delete selection command.
m.objects now closes the resulting gap automatically. The rear part of the cut video is moved to another image track to ensure a seamless transition between the two parts.
Alternatively, you can also create a cross-fade from the front to the back of the video sequence. To do this, move the handles as shown in the following image. As the two videos are set to overlapping mode (image blending), this type of crossfade will give you a smooth transition from the first to the second part of the video clip and avoid a loss of brightness during the crossfade.
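Why overlapping mode (image blending) avoids the loss of brightness can be sketched with two equally bright "clips" modeled as single brightness values. This is a conceptual model of the two compositing modes, not m.objects' renderer:

```python
def stacked_dissolve(a, b, t):
    """Incoming clip (opacity t) composited *over* the outgoing clip
    (opacity 1-t) against black: at the midpoint both layers are
    half-transparent and the result dips below full brightness."""
    return b * t + a * (1 - t) ** 2

def additive_dissolve(a, b, t):
    """Overlapping / image-blending mode: the two opacities sum to 1,
    so two equally bright clips keep full brightness throughout."""
    return a * (1 - t) + b * t
```

At the midpoint (t = 0.5) of a dissolve between two fully bright clips, the stacked variant drops to 75% brightness while the blended variant stays at 100%, which is exactly the dip the overlapping mode avoids.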
In most cases, you will not use videos in their full length in your production, but only need an excerpt from them. For this purpose, m.objects offers you the option of cutting videos to size before producing your AV show so that you can simply insert them later at the appropriate point.
To do this, first proceed as described above: insert the video into an image track, create the desired section by cutting, and then delete the video from the track again. This returns it to the lightbox, where it retains the in/out times, i.e. the information about the start and end of the section. If you then want to insert the video into your production, drag it from the lightbox to the desired position. This allows you to manage video clips clearly in the storyboard.
Double-click on the light curve to open the video properties window. Here you will find the information for In and Out, i.e. for the start and end of the video or the section of the video.
m.objects usually displays the times for In/Out in the format h:m:s:ms (i.e. hours, minutes, seconds and milliseconds). This information is relative to the start of the video stream within the file used. For an uncut video, the value 00:00:00.000 appears here for In, but if the video is cut at the beginning, a different value appears, for example 00:00:02.075.
You can also enter the entry point (In) and exit point (Out) manually here and define a section of the video in this way. To do this, either enter the values in the input fields or click on Start so that the video runs on the screen, and then click on the button below the input field for In or Out at the desired point. The corresponding times are then entered in the field. The playing time of the clip on the timeline is automatically adjusted as a result.
If a video file also has an SMPTE time code (in the format h:m:s:f, i.e. frame number instead of milliseconds), this information is automatically used.
It then usually contains an indication of the real time of the recording. This is used, for example, to synchronize overlapping takes recorded from different camera angles.
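The relationship between the two formats is simple arithmetic: a frame number converts to milliseconds via the frame rate. A sketch of the conversion, assuming a non-drop-frame rate (parameter names are illustrative):

```python
def smpte_to_ms(h, m, s, f, fps):
    """Convert an h:m:s:f SMPTE timecode (frame number instead of
    milliseconds) to milliseconds, assuming non-drop-frame counting
    at the given frame rate."""
    return (h * 3600 + m * 60 + s) * 1000 + round(f * 1000 / fps)
```

At 25 frames/s, for example, frame 12 corresponds to 480 ms, so timecode 00:00:02:12 maps to 2480 ms on the millisecond-based timeline.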
If you want to move the visible content of a video in time, you can also hold down the [Ctrl] key and drag the light curve of the video with the right mouse button pressed instead of manually changing the start offset or moving various handles. Both the display on the timeline and the canvas content visualize the change while you are dragging. This change applies to all clips selected at the same time, including any audio material from dubbing, so that the synchronization of image and sound is maintained.
In most cases, videos are not inserted into a presentation in full length, but are cut to a specific section. With the help of a wizard, m.objects then gives you the option of saving this video section as a new video without any loss of quality and replacing the original with the new video on the image tracks with frame accuracy. You can find a detailed description of this in the chapter Wizard: Trimming (shortening) video files without loss of quality.
From the live expansion level upwards, m.objects offers extensive options for editing important video parameters such as brightness, contrast, sharpness and more directly on the image tracks with the image/video processing dynamic tool. Together with the m.objects editing function, this makes the use of external software superfluous in many cases.
You can read more about how the image/video processing object works in the chapter Color grading with image/video processing.
You can use the Speed/Pitch dynamic object to change the playback of a video in a variety of ways, for example by slowing it down to create a slow-motion effect or speeding it up for a fast-motion effect. This object can also be applied dynamically, so that the change in speed itself appears as a dynamic effect. For example, a video can be slowed down from normal speed to a still image.
A detailed description of the speed object and its use on videos can be found in the chapter Speed/Pitch - dynamic slow motion / time lapse and time stretching.
The dynamic objects in m.objects offer numerous exciting possibilities for integrating images into a multivision. They can also be applied to videos in the same way.
If you want to mount a video as a picture-in-picture effect for design or resolution reasons, use the image field object.
You can see that the background image has been blurred using the image editing functions in m.objects so that the video stands out better in front of it. You can change the position of the video within the canvas as required by dragging an image field object onto the light curve of the video and either entering numerical values in the editing window of the image field object or moving the image field by dragging it with the mouse using the arrow controls. Make sure that the checkmark between the linking symbols is set.
Alternatively, you can simply click on the image field symbol on the light curve and move the video directly to the desired position on the canvas. If you hold down the Shift key while doing this, you can move the image field with the video exactly horizontally or vertically.
As with images, a static or dynamically moving image section can also be displayed for videos. This is particularly useful if the resolution of the video is greater than its effective display, for example if you are showing a video in 4K resolution on a screen with Full HD resolution.
The basic procedure is to drag one or more zoom objects from the tool window onto the video in the image track and drop them here. The editing window of the zoom object offers you the necessary options for changing the zoom factor and the positioning of the zoom center. Alternatively, you can also position the center of the zoom directly in the canvas.
As with images, you can use this method to create static zoom effects in the form of an enlarged section, as well as dynamic zoom movements through a video. You should always make sure that the video used has a sufficiently high resolution. Otherwise, an enlargement would cause blurring and raster effects. However, you do not need to worry about an additional load on the processor or graphics card by applying the zoom object to a video.
Zooming through a video is primarily useful if the movie was shot from a fixed camera position and with an unchanged focal length. You can then use the zoom object in m.objects to subsequently zoom into the movie and incorporate camera pans.
Even if a video clip has already been recorded with zooms or camera pans, the additional use of the zoom object in m.objects can lead to exciting results.
The 3D object can also be used in a variety of creative ways with videos. Basically, you can use it to rotate videos in all conceivable directions and around different centers of rotation. Here too, the additional animation of the video with m.objects places only slightly higher demands on the computer.
Another interesting feature is the option of displaying a video clip in distorted perspective using the 3D object and thus integrating it into an existing scene. This is shown here using the example of an old tube television: the aim is to run the video on the TV screen, an effect that can be achieved with relatively little effort.
To do this, first create a mask in Photoshop or another image processing program in the shape of the screen and place it on an image track in m.objects. The image with the TV set is located in the track below and the video below that (see arrangement in the screenshot above). In the image properties of the mask (double-click in the light curve), it is specified that it serves as an image mask for a track. This makes the image with the TV transparent at the masked point so that the video from the lower image track can be seen there.
An image field object brings the video to the right size and position. A 3D object is used to insert the video into the scenery in the correct perspective. To do this, make the appropriate settings in the properties form for the 3D object.
The main task here is to adjust the value for the Y-axis so that the video is positioned correctly on the screen. The TV and mask are also provided with 3D objects to reinforce the perspective already present in the image.
This function is available in m.objects from the creative expansion level.
With chroma keying, you can create a transparent area in a video through which an underlying image becomes visible. In turn, you define the transparent area via a brightness value, a color value or a hue in the video. The separation between the visible and transparent areas of the video becomes clearer the more these areas in the video differ in terms of their color tone or brightness.
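The underlying decision per pixel can be sketched as a distance test against the key color with a tolerance. The following is a deliberately hard-edged toy model of hue-based keying; real keyers (and presumably SmartKey as well) use soft thresholds and spill handling:

```python
def chroma_alpha(pixel_hue, key_hue, tolerance):
    """Opacity from hue distance (degrees on the color wheel):
    pixels within tolerance of the key hue become fully transparent
    (0.0), everything else stays opaque (1.0).  Toy model only."""
    # shortest angular distance on the 0-360 degree color wheel
    dist = abs((pixel_hue - key_hue + 180) % 360 - 180)
    return 0.0 if dist <= tolerance else 1.0
```

This also illustrates why keying works best when subject and background differ strongly in hue: the larger that distance, the more reliably the tolerance separates the two areas.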
A classic application example of chroma keying is the bluebox or green screen, a process that is often used in film or television productions: a person acts in front of a monochrome blue or green background on which the viewer then sees an image or graphic, giving the impression that the person is in a corresponding scene or standing in front of a weather map, for example.
The screenshot shows a video that was shot in a bluebox. It is in the upper image track A, in track B below is an image that will later appear in the blue color range of the video.
To do this, double-click on the light curve of the video so that the editing window opens.
In the middle, you will see the options for image mixing. You can also find detailed information on this in the Image mixing chapter.
Unlike images, m.objects inserts videos in overlapping mode by default. For transparency, you will initially find the entry None. Select the SmartKey option from the drop-down menu and then click on the box next to Eyedropper. Then use the pipette to pick up a medium color tone from the background. To reliably assess the effect of the keying, select the Single image option at the bottom of the video clip properties window. You can use the tolerance slider to make fine adjustments if necessary. The blue area of the video is now transparent, and the woman appears in front of the background image as soon as you confirm the window with OK.
Here you can also select Define color or Define hue. The tolerance of this selection is much lower, so darker or lighter areas may not be included and will then not be transparent or only partially transparent. In this example, SmartKey is the better alternative, as the blue background is not completely evenly illuminated. The additional SmartKey Reflection option is useful if the subject to be cropped contains reflections of the background color. This can be the case, for example, if a person in a white shirt is filmed in front of a green screen and the green of the background is reflected in the shirt. The SmartKey Reflection option then ensures that these reflections are also removed from the image.
Of course, chroma keying in m.objects does not require videos shot in a bluebox. It offers exciting possibilities for experimenting with transparency effects in videos of all kinds. For example, a second video can be placed underneath, which then appears in the transparent area, or a tracking shot or a 3D animation.
It can also be particularly attractive to use the video cropped with chroma keying as a mask. As with still images, simply select the Image mask option in the video editing window and then enter the corresponding number of image tracks to be masked. The moving image in the video gives you a moving mask.
m.objects can use video files in the following containers and formats - among many others - directly in the timeline:
- Compression: H.264, H.265 (HEVC), VP9, MPEG-2, MPEG-4, MJPEG, DV, HDV, WMV, DivX, Apple ProRes and many more
- Container: Apple Quicktime (*.qt, *.mov), ASF, MTS, M2TS, VOB, AVI, FLV and others
In addition to video editing and cutting sound files, m.objects also offers the option of cutting the entire content of the image and sound tracks at the position of the locator. This means that all media are cut at the corresponding position, i.e. the light curves of images and texts are also split. This can be particularly helpful if you want to cut certain areas out of an m.objects show or split the show into several parts.
To do this, first position the locator at the point in the show where you want to perform the split. Now press the key combination Ctrl/Cmd + K. This splits all light curves at this point or, if you pressed the key combination in the audio tracks, all audio files.
If there are one or more videos at the position in question, Ctrl/Cmd + K will initially only split these videos. If you then press Ctrl/Cmd + K again, the other light curves will also be cut at this point.
If you hold down the Shift key in addition to Ctrl/Cmd + K, all media in the image tracks are split directly, i.e. images, texts and videos.
It is important to check whether the Selection in all components icon in the toolbar is activated.
When the icon is activated, a green dot appears next to the three arrows.
If the icon is not activated, the split only takes place in the image tracks. If it is activated, however, pressing the Shift key splits all media in the image and sound tracks, i.e. all images, texts, videos and sound files. In this case, a cut is made through the entire production on the m.objects timeline.
m.objects also offers the option of automatically combining video and audio media into groups.
This is particularly useful if you have previously separated the sound from the videos into the audio tracks.
A crucial point with this method of cutting media at the locator position is that all dynamic objects used in the cut object, such as image fields, zoom and image/video processing, are also created accordingly in the newly created section. If the cut is made within a dynamic action such as a zoom move through an image, m.objects creates new dynamic objects at the cut edge. This means that the zoom move (or other dynamic action) is continued exactly at the corresponding point in the new section of the production.
You can also limit the cut at the locator position to selected media. To do this, select the desired images, texts, videos and/or sound files and then press Ctrl/Cmd + K. The cut is then only made in the selected objects.
m.objects offers comprehensive functions for color correction and post-processing of images and videos. There are also options for global correction and post-processing of an entire show.
The Image/video processing dynamic object is available from the m.objects live expansion stage.
It offers important editing functions that can be used both statically and dynamically for animations. You will find the Image/video processing object - like all other dynamic objects - in the tool window when the image tracks are activated.
The special feature of this object is already indicated by its name: It can be applied to both images and video sequences. This provides m.objects with extensive options for post-processing images and videos, making the use of external programs superfluous in many cases. The object works non-destructively, meaning that the original files are not changed in any way.
To use it, drag the Image/video processing object onto an image or a video in the image tracks. A corresponding icon appears in the light curve and the editing window opens. You can reopen the editing window at any later time by double-clicking on the icon.
On the left-hand side you will see the effect settings; the first item there is a choice between white balance and tint.
You can use the white balance to correct an image or video if necessary, for example if it has a color cast. To do so, first select the Eyedropper option. If you now move the mouse pointer into the m.objects canvas, it takes on the shape of an eyedropper, which you can use to pick up a suitable color value for the white balance. Click on a point in the image that is as bright as possible and that should have a neutral color value in the result, i.e. should be displayed in white or light grey. You can now see the result of the correction directly in the canvas, while the color previously picked from the uncorrected image or video is displayed in the Neutral point color field in the form window. The example shows the uncorrected part of the image in comparison with the correction using white balance.
If you are not yet satisfied with the result, repeat the steps described above and select another point in the image as the neutral point.
As m.objects image and video processing is a dynamic object, you can also change the white balance dynamically. This can be particularly useful for videos with tracked white balance by the camera in order to adapt the white balance to the respective scene.
If you want to tint an image or video with a specific color instead of a white balance, first select the Tint option.
Click in the Target color field to open the color selection, where you can select any color.
To reset the color selector to white if required, simply click on the Set to pure white button at the top right.
You can see the effect of your selection as a live preview directly in the canvas. Alternatively, you can of course also use the eyedropper tool here and pick up a color value from the canvas for the tint. In this case, the mouse pointer retains its eyedropper function even after a mouse click so that you can repeat and change your selection several times. With the Level setting, you determine the intensity of the tint by either entering a value there or setting it with the arrow control. These changes also appear immediately on the canvas. In this way, interesting effects can be achieved, such as coloring in a sepia tone, which artificially ages an image or video, giving it a quasi-antique appearance.
By clicking on the Set to pure white button, m.objects sets white as the target color, which converts the selected image into a grayscale image. Depending on the subject, this effect can be effectively refined using the other correction options in image/video processing.
Under the settings for white balance or tint, you will find the other correction options in the image/video processing window:
Use the Gamma value to change the brightness distribution between the darkest and lightest points in the image or video.
You can use the contrast control to increase or decrease the contrast.
Gain multiplies the brightness values according to the setting made. In contrast to the Brightness (Offset) slider (see below), the Gain slider increases the existing exposure without affecting the black level of an image or video. An underexposed photo or video can therefore be corrected with a single slider, without having to compensate for the contrast. There are also scenarios in which the correction can be used dynamically. If, for example, only a dark section of an otherwise perfectly exposed image is visible at times during an animation, the brightness can be raised dynamically to make details more visible during this period. The dynamic use of the gain control is also particularly useful for video sequences, for example to temporarily brighten passages that are too dark.
The Brightness (Offset) slider, on the other hand, shifts the brightness values in the image or video up or down. It therefore adds a value to the existing brightness values.
Saturation increases or decreases the saturation of the colors in the image or video.
You can use the Sharpening slider to effectively sharpen an image or video.
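The difference between these sliders can be modeled mathematically. The following sketch illustrates the general operations only; the exact formulas and value ranges m.objects uses are not documented here. Gain multiplies, offset adds, and gamma redistributes the tones in between.

```python
def adjust(value, gamma=1.0, gain=1.0, offset=0.0):
    """Apply the three corrections to one pixel value in 0.0 .. 1.0."""
    v = value ** (1.0 / gamma)    # gamma > 1 lifts the mid-tones
    v = v * gain                  # gain scales: black (0.0) stays black
    v = v + offset                # offset shifts: the black level moves too
    return min(max(v, 0.0), 1.0)  # clip to the displayable range

black, mid = 0.0, 0.5
# gain brightens mid-tones while the black level stays untouched:
print(adjust(black, gain=1.5), adjust(mid, gain=1.5))        # 0.0 0.75
# offset lifts everything, including black:
print(adjust(black, offset=0.25), adjust(mid, offset=0.25))  # 0.25 0.75
# gamma > 1 brightens the mid-tones without moving black or white:
print(adjust(0.25, gamma=2.0))  # 0.5
```

This is why a gain correction of an underexposed shot does not require a contrast compensation, while an offset correction does: offset moves the black level along with everything else.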
You can make the desired changes and corrections either manually by entering numerical values or, even more intuitively, by clicking and dragging with the arrow controls. You can read more about this in the chapter Working with the arrow controls. The changes can be followed in real time on the screen.
The two buttons Compare original and Compare before are particularly practical. If you click on the Compare original button and hold down the mouse button, the original image or video clip appears in the canvas without the changes you have made. Alternatively, press and hold the D key to compare your corrections with the original image or video. The Compare before button works in the same way (alternatively with the C key), but displays the last version of the image or video saved in the show, so that you can easily compare your latest changes with it.
If the Show clipping during editing option is selected at the bottom of the form, m.objects indicates by flashing in the canvas if the corrections cause the brightest or darkest areas of the image or video to lose all detail, i.e. to be clipped. This is an effective way of avoiding over-correction in post-processing.
As already described in the white balance example, you can also change all of these values dynamically. To do this, add another image/video processing object to the right of the light curve and change the desired value. As with all other dynamic objects, m.objects automatically takes care of the animation between these objects. If necessary, add further objects to refine the animation with intermediate steps.
The other settings for the dynamic effects can be found on the right-hand side of the object's editing window under Dynamics. A detailed description can be found in the Dynamics options chapter.
At the top right of the image/video processing window, you will find the Color grading LUT option. This is about color grading with lookup tables, with which you can change the color effect of videos and photos in a variety of ways and creatively alienate them. A detailed description of this follows later in this chapter.
Global color grading is available in all m.objects expansion stages.
In the canvas settings (in the context menu of the canvas or in the menu under Settings), you will find all the options under Post-processing that are also available with the Image/video processing dynamic tool. Operation is therefore the same as described in the previous section of this chapter. In contrast to the dynamic object, however, all changes you make here affect the entire presentation.
A global adjustment of the white balance or tint can be particularly helpful to compensate for a color cast - for example, when using a projector whose color display you want to correct. This can be done with just a few mouse clicks and completely without recalculation of content, so that the display can be quickly adjusted to unfavorable ambient light conditions or the color display can be corrected on site immediately before the start of the presentation.
One possible application for the global use of the Gain slider is an image projected onto a relatively small surface with a powerful projector, which appears unpleasantly bright. If you adjust the gain value downwards in this case, you can effectively remedy the situation.
You can also select a lookup table (LUT) here to give the presentation a specific color look. This allows you to set a LUT for the entire presentation and combine it with correction LUTs in the settings for individual videos. You can read more about lookup tables in the following section of this chapter.
The global color grading settings affect all types of media (i.e. images, text, graphic elements and videos) on the timeline.
All settings made here can be saved by clicking on the button below. You then enter a name for the settings and can easily reapply these presets later in another show using the Load button. This can also be helpful if you regularly use certain output devices whose color rendition you want to correct.
With color grading, m.objects offers an editing technique that goes far beyond the mere correction of images and videos. With the help of so-called lookup tables, or LUTs for short, videos and photos can be changed in their color effect in a variety of ways. These changes range from the accentuation of certain colors to significant alienation. A LUT assigns changed color values to certain color values or color combinations of the original material, i.e. in simple terms, it gives instructions on how colors and color combinations are to be interpreted. A typical application example for color grading with lookup tables is feature films, where LUTs are almost always used in post-production to give the films a specific look, which we as viewers associate with certain emotions. There are countless LUTs available for this purpose, each with its own effect. A very popular effect in post-processing, for example, is a color change towards blue-green and orange, which can often be seen in current feature films.
With color grading in m.objects you can give your presentation, certain sequences from it or even individual images and videos a unique look, allowing you to specifically influence and reinforce the message and intention of your m.objects show.
The example shows a possible mode of action of a LUT: On the left is the unprocessed part of the image, on the right with the application of the LUT.
In addition to the creative, artistic use of LUTs, there are also correction LUTs. With many professional and semi-professional cameras, video sequences / images are initially recorded in a way that saves as much image information as possible. Similar to RAW images, such videos/images initially appear dull and low-contrast when unprocessed. Correction LUTs, which are precisely matched to the respective camera model, provide a remedy here. This type of application is of course also possible in m.objects.
Application of a correction LUT: the unprocessed part of the image on the left, the corrected part on the right.
m.objects always uses so-called 3D LUTs, which are much more flexible than 1D LUTs.
You can find numerous articles on the Internet with further information on the effects and use of lookup tables. In these articles, lookup tables are generally described in connection with videos. However, m.objects offers you the option of applying LUTs not only to videos, but also to individual images.
There are two ways to apply a LUT in m.objects: You can assign a LUT in the Properties window of the respective image or video, or you can carry out this step with the Image/video processing dynamic object. The second way also offers the option of adjusting the intensity of the LUT and of combining the LUT with the other settings and correction options of the image/video processing.
As soon as you use a lookup table, m.objects offers to copy it to the project directory via the file manager. It makes sense to carry out this step so that the project directory remains complete and m.objects can continue to access the LUT when transferring the show to another computer, for example. A new folder with the name Lut is created in the directory structure of your project.
There are numerous websites that offer lookup tables for download. A Google search for keywords such as lut and download, or a targeted search for specific providers such as ground control, will find the relevant links. In addition to a wealth of free LUTs, there are also fee-based offers for special purposes.
Correction LUTs for certain cameras can usually be downloaded from the respective manufacturer's website.
m.objects processes the following file formats for LUTs: *.3dl, *.cube, *.dat and *.m3d
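The *.cube format mentioned above is a plain-text format: a 3D LUT is a cube of sample colors indexed by the input color. The following sketch parses the essential parts of a *.cube file and applies the table with simple nearest-neighbor lookup. Real implementations interpolate between neighboring entries, and the function names here are illustrative, not m.objects internals.

```python
def load_cube(lines):
    """Read a 3D LUT from the lines of a *.cube file: a size keyword
    followed by size**3 RGB triples, with the red index varying fastest."""
    size, table = 0, []
    for line in lines:
        parts = line.split()
        if not parts or parts[0].startswith("#"):
            continue                      # skip comments and blank lines
        if parts[0] == "LUT_3D_SIZE":
            size = int(parts[1])
        elif len(parts) == 3:
            try:
                table.append(tuple(float(p) for p in parts))
            except ValueError:
                pass                      # other keywords are ignored here
    return size, table

def apply_lut(rgb, size, table):
    """Map one RGB color (components 0.0 .. 1.0) through the cube."""
    r, g, b = (min(size - 1, round(c * (size - 1))) for c in rgb)
    return table[r + g * size + b * size * size]

# a tiny 2x2x2 LUT that swaps the red and blue channels:
cube_text = ["LUT_3D_SIZE 2",
             "0 0 0", "0 0 1", "0 1 0", "0 1 1",
             "1 0 0", "1 0 1", "1 1 0", "1 1 1"]
size, table = load_cube(cube_text)
print(apply_lut((1.0, 0.0, 0.0), size, table))  # -> (0.0, 0.0, 1.0)
```

A creative LUT simply contains a less trivial table: every input color region is redirected toward the intended look, for example toward the blue-green and orange palette mentioned above.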
Double-click on the light curve of an image or video to open the corresponding properties window. Click on the Color grading LUT button in the top right-hand corner.
In the following window, select the folder in which you have saved the desired LUT and click on it. The lookup table is immediately applied to the image or video, so the effect is directly visible in the canvas. In this way, you can also try out different LUTs first. Then confirm with Open. The selection you have made is displayed above the button. Confirm the properties window with OK.
To deactivate a LUT again, click on Color grading LUT again in the properties window and then, in the following window, on the Do not use LUT for color grading button at the bottom left. This change is also immediately visible in the canvas. Then confirm the properties window again with OK.
First drag the Image/video processing dynamic object from the tool window onto the light curve of the photo or video. This opens the object's editing window. If such an object is already on the light curve, double-click on it.
Click on the Select LUT for this medium button at the top right and select the corresponding directory and the desired LUT in the following window. Confirm with Open so that the LUT is applied and its effect is visible in the canvas. The name of the activated LUT is now displayed above the button. In the same selection window, you can later deactivate the LUT again by clicking on Do not use LUT for color grading, as described above.
Unlike the properties window of the object, the image/video processing also offers you the option of adjusting the intensity of a LUT. You can set the level directly under the Select LUT for this medium button.
The default value here is 100%. If you want to reduce the effect of the LUT, enter a corresponding value here or click on the orange arrow, hold down the mouse button and set the desired effect by dragging with the mouse. Of course, you can also follow these changes directly in the canvas.
As image/video processing is a dynamic object, you can also use a LUT dynamically. To do this, drag another image/video processing object (or several, if required) onto the light curve and then enter a different value for the level, i.e. the intensity of the LUT. As with other dynamic effects, m.objects creates an animation between the objects so that, for example, the effect of a LUT is slowly faded in from level 0% to level 100%. Please note, however, that only one specific LUT can be assigned per light curve, even if it contains several image/video processing objects.
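Conceptually, the level can be thought of as a linear blend between the untouched color and the LUT output. This is an illustrative model, not documented m.objects internals:

```python
def blend(original, graded, level):
    """Mix two RGB colors; level 0.0 = original only, 1.0 = LUT only."""
    return tuple(o + (g - o) * level for o, g in zip(original, graded))

original = (0.25, 0.5, 0.75)
graded = (0.75, 0.5, 0.25)           # color after the lookup table
print(blend(original, graded, 0.5))  # halfway: (0.5, 0.5, 0.5)
```

Animating the level between two image/video processing objects then corresponds to sweeping this blend factor over time.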
In m.objects, there are a number of wizards that perform complex tasks with just a few mouse clicks that would otherwise require several steps. Using the wizards simplifies and speeds up the production of an AV show and in many cases saves the time-consuming repetition of the same work steps.
You can subsequently change and customize all actions that you perform using the wizards so that you always retain control over what is happening in your production. You can find the wizards in the Edit / Wizards menu. The only exception to this is the Guideline wizard, which you call up via the context menu of the canvas.
Alternatively, you can also call up the wizards via the context menu by right-clicking on a selected object, for example on the bar under the light curve of an image.
You can use a wizard to edit a single object (e.g. an image), several objects simultaneously or the entire show. Therefore, the selected area is displayed in all m.objects wizards (except in Auto-Show). For example, if you have selected several images and then called up the wizard, the selected area extends from the first to the last selected image.
Below this, you will also find the option Only apply to selected objects in some wizards. It is activated by default, which is important if not all objects in the selected area are selected, for example if one or more images have been omitted. The wizard is then not applied to these unselected objects. If the option is deactivated, however, it is always applied to all objects in the selected area. If you call up the wizard without first selecting an object, it will apply to the entire show.
This wizard makes it easy to synchronize images to the music in your AV show. To do this, you first create markers on the timeline in time with the music, which the wizard then uses to align the fade-ins and fade-outs of the images.
First start the locator at the desired point in the show and then press the Del key on the keyboard to match the beat of the music. Each time you press the key, a so-called single marker is created on the timeline, which runs as a vertical line across the video and audio tracks.
If some individual markers do not yet match the beat of the music exactly, you can simply move them on the timeline with the mouse, using the dynamic curve of the music in the audio track as a guide. In this way, you can subsequently optimize the positioning of the markers.
The use of individual markers is particularly useful if you want to create 'hard' image changes, i.e. without fading from one image to another. If, on the other hand, you want to work with transitions, the so-called range markers are helpful. To create these, also start the locator, press the Enter key at the desired start of the crossfade, hold it down and release it again when the crossfade should be finished. Markers appear on the timeline again, but these are now connected by a blue line. This is the area in which the crossfade should take place.
You can also move the range markers on the timeline to correct their position.
Before you call up the wizard via Edit / Wizards / Synchronize images to timestamps, make sure that the created markers are selected on the timeline.
At the bottom of the window you will find an input field for the fade time. This entry only applies to single markers; for range markers, the fade time is determined by the marked area. So if you are using single markers and want hard image changes, enter the value 0 here. Now confirm the window with OK.
The wizard now aligns the fade-ins and fade-outs precisely to the single markers. If you enter 1.00 s instead of 0.00 s, for example, the wizard positions the fades so that two thirds of each fade take place before the marker and the last third after it. This way, the crossfade is already clearly visible at the marker itself.
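The two-thirds rule can be written as simple arithmetic. This is a sketch of the described placement, not program code from m.objects:

```python
def fade_window(marker, fade_time):
    """Place a crossfade of the given length around a single marker so
    that two thirds lie before the marker and one third after it."""
    start = marker - fade_time * 2.0 / 3.0
    end = marker + fade_time / 3.0
    return start, end

# a 3 s crossfade around a marker at 12 s runs from 10 s to 13 s:
print(fade_window(12.0, 3.0))  # (10.0, 13.0)
```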
If you use range markers, the fade-time entry is irrelevant. Simply confirm the window with OK and the wizard aligns the transitions with the markers you have created.
The Do not move the following objects if possible option ensures that the images following the selection on the image tracks are not moved. In this case, the wizard adjusts the time of the last image in the selected area accordingly. However, shifting cannot be avoided if the synchronization moves the selected images significantly backwards on the tracks, so that the wizard must also shift the following images.
When working on a show, it can quickly happen that the fade-ins and fade-outs or cuts of individually stored or shifted images have a slight, unwanted offset in relation to the preceding and following images, as the example shows.
It can also happen that a handle for an image that is actually fully exposed accidentally slips below the 100 percent line during editing.
The Align fade-in / fade-out wizard can intelligently detect such inaccuracies and correct them automatically. To do this, select the area you want to check and select the wizard mentioned in the menu.
By clicking on OK, the wizard makes the necessary corrections and provides corresponding feedback.
However, if the deviations exceed a certain level, i.e. if there is a very clear offset between the fade-ins and fade-outs or cuts, or if a handle deviates significantly from the 100 percent line, the wizard does not carry out a correction. In such cases, it assumes that these deviations are intentional, so that deliberately offset fades or images that are deliberately not fully faded in remain untouched.
The Timing compression/stretch wizard can be used to scale the timing of image sequences, entire presentations or individual animations and effects. You can also change the fade and hold times.
If you want to extend or shorten the duration of an image sequence in your presentation, first position the locator where the image sequence should end in future. Then mark the sequence by dragging a frame around it with the left mouse button; for this function it is also sufficient to select only the first and last image. Then select the menu item Edit / Wizards / Compress/stretch or standardize timing and tick the Compress/stretch range option. Click on the Locator button at the bottom right to adopt the new end point of the image sequence from the position of the locator and confirm with OK.
Alternatively, you can also set the compression or stretching of the image sequence manually using the slider, or enter the end of the image sequence numerically as a time value under End of range after.
In contrast to the first example, in this case you not only influence the objects in the image tracks, but adjust an entire area of your presentation. To do this, select the area on the timeline, select Edit / Wizards / Compress/stretch or standardize timing and tick the Compress/stretch range box. Now make sure that you select all points in the list of affected components in order to adjust any existing audio arrangement, waiting times, index marks on the time ruler and comments. Then move the slider or enter the desired end time manually. It is of course also possible to adopt the locator position, as in the first example. Confirm with OK.
Please note that cuts in sound samples within the scaled range may need to be revised. After stretching, it can of course also happen that sound samples are too short. You must make the corresponding corrections manually.
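The proportional scaling behind compress/stretch can be sketched as a linear mapping of every event time in the selected range. This illustrates the principle only and is not m.objects code:

```python
def stretch(times, range_start, old_end, new_end):
    """Scale event times so the range ends at new_end instead of old_end."""
    factor = (new_end - range_start) / (old_end - range_start)
    return [range_start + (t - range_start) * factor for t in times]

# compressing a range from 20 s .. 30 s down to 20 s .. 25 s
# halves every distance from the range start:
print(stretch([20.0, 24.0, 30.0], 20.0, 30.0, 25.0))  # [20.0, 22.0, 25.0]
```

Because every time inside the range is scaled by the same factor, fades, hold times and marker positions all keep their relative proportions.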
If you do not want to scale a selected area proportionally, you can instead enter fixed values for the fade and hold times here. To do this, mark the area as described above and select the wizard.
Now place a tick next to Standardize hold times and/or Standardize fade-in / fade-out times. Then enter the desired values for the new hold times or fade times in the input fields below. If you also select the Include video clips option, videos in the selection will be adjusted accordingly. Finally, confirm with OK.
Entering an additional new end point for the selected area is no longer possible here, as the end point results from the hold times and fade-in/fade-out times. For this reason, the Locator button cannot be selected in this case.
This function is most effective and easiest to implement in simple image sequences, but in complex arrangements on the timeline it can lead to conflicts with existing animations, for example. If necessary, m.objects will then deviate from the specifications made and carry out the changes in an adapted form. In any case, you should check existing animations after the change and correct them if necessary.
If you activate the expert mode here by ticking the corresponding option, the time scaling of image sequences can be carried out without influencing the following objects, and differentiated for fade-in and fade-out times.
The Compress/stretch or standardize timing wizard normally ensures that subsequent areas of the production are automatically adjusted to the changed timing. In special cases, however, it may make sense to carry out scaling without shifting subsequent areas. In this case, remove the checkmark next to the Track subsequent objects option. In this mode, the linked treatment of the selected images as an image sequence can also be switched off, so that each object retains its starting position and is modified separately. In addition, you can restrict the effect to the fade-in or fade-out time only.
When you start producing an m.objects show, one of the first things you do is set a suitable aspect ratio for the m.objects canvas. If images in your show deviate from this aspect ratio, they will be displayed with black bars at the top and bottom or right and left on the canvas. Using the wizard, you can now enlarge the images so that they completely fill the canvas.
The same also applies to images that are displayed at a reduced size in the canvas using an image field. Here too, you can use the Adjust aspect ratio wizard to ensure that the image completely fills its image field.
The Adjust aspect ratio wizard performs this action automatically. Open the wizard via the Edit / Wizards menu.
If you now confirm with OK, it places a zoom object on each of the selected images, which is automatically set to the appropriate zoom factor. If there is already a single zoom object on an image, the wizard adjusts this object accordingly. The automatic image field option is also selected here by default. This means that m.objects sets a zoom factor of 100% in relation to the canvas or image field. In this case, 100% means that the image completely fills the canvas or image field.
Instead of zoom objects, image field objects can also be used to adapt to the aspect ratio of the canvas. To do this, simply activate the corresponding option in the wizard. The wizard then places a correspondingly set image field on the light curve: The image field is enlarged beyond the edges of the canvas until the image fills the canvas. The wizard also adjusts any existing single image field accordingly.
When using image fields, it is also possible to adjust the image to the canvas by distorting the image. To do this, select the corresponding option in the wizard. The advantage of this is that the image is not 'cropped', which cannot be avoided when adjusting without distortion. If the image format deviates slightly from the canvas format, such distortion can be useful as it is not perceived by the viewer, whereas if the deviation is greater, the distortion is usually too strong.
Below this, you will find the option Automatically adjust any existing dynamic objects. If you have animated an image, for example with a zoom move, there are two or more dynamic objects on the light curve, i.e. zoom, image field, rotation or 3D objects. By selecting this option, the wizard ensures that the animation is adjusted to the changed representation of the image. Fine-tuning the dynamic objects can still be useful.
The Animation (Ken Burns) wizard enables the automated creation of uniform or varying animations of images and videos.
To make an image sequence more dynamic, more or less clearly perceived animations such as zoom-in, zoom-out and/or panning, also known as Ken Burns effects, can be useful. Creating them is sometimes time-consuming, especially with a large number of images or short video clips, and integrating a random component, i.e. a certain variation in the direction and strength of the animation, requires manual work. The Animation (Ken Burns) wizard takes care of such work with just a few mouse clicks.
In addition to a range of different presets, the wizard also offers manual setting options. It is a good idea to experiment intensively with the various options and test their effects on images and videos. If the result does not meet your expectations, call up the wizard again and try other settings until you see the desired result on the screen. Of course, you can also undo the editing at any time with Ctrl + Z.
To use the wizard, first select the images/videos of the sequence that you want to animate. Then open the wizard via the menu Edit / Wizards / Animation (Ken Burns) or via the key combination Alt + 6.
In the wizard form, you will see the Selectable presets option at the top. Click on it to open a list with a range of different presets that you can use to create the animation and modify if necessary.
At the top of the list, you will find presets labeled Cutout ..., which you can use to create pans through the images/videos and also specify the direction of the movement. This is followed by presets for zoom in and zoom out, which you can use to zoom in and out of the images/videos.
If you have selected one of the presets, m.objects will automatically enter the other values in the form. As soon as you click OK, the program inserts two zoom objects for each image or video. The first zoom object marks the start and the second the end of the animation. If you want to work with image field objects instead of zoom objects, select the corresponding option at the bottom of the wizard. The use of image field objects can be useful, for example, if you want to achieve a straight-line movement when panning horizontally or vertically through images or videos with different zoom values. Zoom objects, in contrast, create a sweeping, curve-like course of the movement in this case.
You will also find some presets with variable values in the list. The Ken Burns assistant randomly determines the corresponding values within certain limits. In this way, you can create different variations of the movement sequences, especially for longer sequences.
If you want to work with manual values instead of one of the default settings or modify the default settings, use the input fields below the default settings.
Use the Zoom option to specify how much the images or videos are enlarged or reduced in the canvas. Value 1 and value 2 are assigned to the two zoom or image field objects on each image/video of the animation.
If you enter the same value in both fields, for example 150%, you can ensure that the respective image/video is enlarged to such an extent that a subtle pan is created.
If you enter two different values, for example value 1: 100% and value 2: 200%, you create a zoom into or out of the image/video. In this case, you can also enter a value for the scattering. The Ken Burns Assistant then randomly sets the zoom value differently in each image/video. It proceeds in such a way that the smaller of the two values remains unchanged, while the larger is set to a value between the two specifications. In the example, this would mean that value 1 remains unchanged at 100% and value 2 is set between 100% and 200%.
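The scattering logic described above can be sketched in a few lines of Python (an illustrative model only; `scattered_zoom` is a hypothetical helper, not part of m.objects):

```python
import random

def scattered_zoom(value1: float, value2: float, scattering: float) -> tuple[float, float]:
    """Sketch of the scattering rule: the smaller zoom value stays fixed,
    the larger one is drawn randomly from the range between the two,
    scaled by the scattering percentage (hypothetical helper)."""
    lo, hi = sorted((value1, value2))
    # With 100% scattering the larger value may fall anywhere between lo and hi;
    # with a smaller scattering it stays closer to the original larger value.
    new_hi = hi - (hi - lo) * random.random() * (scattering / 100.0)
    return (lo, new_hi)

random.seed(1)
start, end = scattered_zoom(100.0, 200.0, scattering=100.0)
print(start, end)  # value 1 stays at 100%, value 2 lands between 100% and 200%
```

With scattering set to 0%, both values are returned unchanged, which reproduces a fixed zoom from value 1 to value 2.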
In the zoom sequence, you can define the assignment of the two zoom values to the first and second zoom or image field object. If you select the option 2 > 1, the animation runs from value 2 in the direction of value 1. Value 2 is then assigned to the first and value 1 to the second zoom or image field object. Of particular interest here is the random option, which lets the Ken Burns wizard determine the assignment randomly. In an animated image sequence, you can thus achieve the effect that zoom-in and zoom-out alternate in random order.
For Direction, use the options from left > right to down > up to specify the direction in which horizontal or vertical panning should take place.
To zoom into or out of an image/video, select the defined reference point option. This determines whether the zoom movement runs centered into or out of the subject or whether it targets an area outside the center. You define the desired reference point with the horizontal and vertical values. For an exactly centered movement, both values must be set to 50%, which corresponds to the center of the screen.
Here, too, you can enter a value for the scattering, which the wizard uses to randomly create the reference point differently for each image/video. With 100% scattering, the wizard can set the reference point at any point on the screen. With a value below 100%, the scattering deviates less from the specified horizontal and vertical value.
The option Identical reference point at start and finish is preselected by default. If you deselect this option, the wizard creates different reference points for the start and end of the zoom movement, so that the image/video is also panned.
If you use videos in your AV production, m.objects offers you convenient options for editing the sound separately, for example cutting the sound, changing the volume or applying sound effects. The prerequisite for this is that a video is not only placed on an image track, but also on a sound track. m.objects displays the volume envelope of the video sound there and provides all the editing functions that you also have for all other sound samples. It is important for the exact synchronization between image and sound that the video is positioned on the tracks at exactly the same time. In addition, the video must be muted in the picture track so that only the edited sound from the sound track can be heard. The Separate video sound on sound track wizard takes care of all this work for you and can also convert the sound to MP3 format if required. If you have not already done so, place a video on an image track.
Right-click on the bar below the light curve. Now select the wizard in the context menu.
The option Convert separated sound to MP3 file is already preselected. Below this, you will find the option Group handles for image and sound automatically. This groups the video in the image track with the video sound in the sound track. If you move the video on the picture track later, it will also be moved in the sound track, so the synchronicity is maintained. Within this grouping, however, both the video and its sound can be moved to other tracks if required. Of course, the synchronicity is also retained here.
The grouping can be subsequently removed again via Edit / Split event group(s).
Confirm the wizard with OK. m.objects will now integrate the sound from the video into an audio track. If several audio tracks are available, m.objects will use the lowest one on which there is enough space. If necessary, a further audio track will be created.
The sound from the video is now available separately on an audio track and can be further edited there. Alternatively, you can also call up the wizard from the video editing window. To do this, double-click on the light curve.
Of particular interest in connection with the video assistant is the option of setting a start offset for the video here, which means that the video does not start at the beginning, but at a different time that can be set as required. You can now open the video assistant, which adopts the settings for the start offset, by clicking on the Transfer soundtrack to digital audio button. The video sound will then also start in the soundtrack with the preselected start value.
In most cases, you will not use full-length videos in your multivision, but rather shorten them to a specific section. As m.objects works non-destructively, the uncut original video is initially retained in the project directory, while only a short part of it is used on the image tracks. This can lead to a considerable volume of data, especially with longer videos. For this reason, the program offers you the option of trimming one or more videos to the required length, including a cutting reserve at the beginning and end, and saving them as new file(s) using the Trim video files losslessly wizard. The key point here is that this trimming is done without recompression - i.e. without any loss of quality compared to the original. This procedure works with almost all commonly encoded video files.
To do this, select one or more videos that you have trimmed on the image tracks and select Edit / Wizards / Trim video files losslessly (shorten) in the program menu. If you want to trim all the videos in your presentation, you can also call up the wizard without making a prior selection.
In the following window, you will see the target directory for the trimmed videos. By default, m.objects creates the subdirectory trimmed in the video folder of your project. It is recommended that you continue to work with this default; alternatively, you can of course specify a different directory here.
Below this, you will find the input options for an editing reserve. Enter here how many additional seconds should be saved from the original before the start and after the end of the edited video. Such a reserve is useful to be able to make changes to the presentation later.
At the very bottom of the wizard's editing window, you will find the option Automatically replace timeline with trimmed videos. If the check mark is set here, m.objects subsequently replaces the videos on the timeline with the trimmed versions with frame accuracy.
Confirm with OK to start trimming the videos. After m.objects has completed the process, you still have the option of displaying the log.
The log lists the processed videos and indicates the image track on which they are stored and the time at which they start. Both the original videos and the trimmed videos are displayed with their file path. The names of the trimmed videos are given the suffix _trim. This is followed by information in milliseconds about the start and end of the video excerpt from the original, including the editing reserve. As a rule, the value for the start deviates slightly from the specified trim reserve. The reason for this is that lossless trimming can only begin at a keyframe, so m.objects extends the editing reserve back to the previous keyframe.
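Why the logged start value usually exceeds the requested reserve can be illustrated with a short sketch (a hypothetical helper assuming the keyframe timestamps are known; not an m.objects API):

```python
def trim_start_ms(clip_start_ms: int, reserve_ms: int, keyframes_ms: list[int]) -> int:
    """Lossless trimming can only begin at a keyframe, so the actual start
    point is moved back to the nearest keyframe at or before
    (clip start - requested reserve). Hypothetical helper for illustration."""
    wanted = max(clip_start_ms - reserve_ms, 0)
    # Choose the last keyframe that is not after the wanted start position.
    candidates = [k for k in keyframes_ms if k <= wanted]
    return max(candidates) if candidates else 0

# Keyframes every 2 s; a clip used from 10.5 s with a 1 s reserve:
print(trim_start_ms(10500, 1000, list(range(0, 60000, 2000))))  # → 8000
```

In this example, the effective reserve grows from the requested 1 s to 2.5 s because the trim must start on the keyframe at 8 s.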
The log is also saved as a text file in the trimmed folder so that you can access it later.
Videos that have already been trimmed can also be cut and trimmed again on the picture tracks, of course without any loss of quality compared to the original.
Once you have trimmed the videos, you can delete the corresponding originals from the video directory of your project and thus significantly reduce the data volume. Of course, it is important that data backups of the original videos - as with all images, videos and sounds - exist elsewhere.
Available from the m.objects live expansion level
You can use the Stabilize or reverse video files wizard to stabilize shaky video clips or reverse the running direction of videos. You can find the wizard in the Edit → Wizard menu.
After calling up the wizard, first select the Stabilize video option. The wizard can be applied to a single video or optionally to a selection of several video clips at the same time. Due to the high computing effort required for this process, it does not take place in real time. The original video remains untouched, m.objects saves the stabilized result in the stabilized folder within the video subdirectory of the current production.
A major advantage of using this function directly from m.objects is the fact that the selected video clips can also be trimmed losslessly to the section actually used on the timeline during stabilization (see also chapter Frame-accurate and lossless video trimming from p. 191). To do this, select the option Only process the section used on the timeline. This means that the process does not take unnecessarily long and the zoom required for stabilization is reduced to a level that corresponds to the unwanted camera movements within this section.
The smoothing of the movements can be set in the form of a time window between 0.2 and 10 seconds. This makes it possible to either achieve a very smooth movement (higher value) or to filter out only very fast wobbles (lower value). In many cases, freehand video recordings are intended to convey an action character, so a certain amount of unsteadiness in the image is definitely desired here. You can also optionally allow the content to be rotated to compensate for accidental tilting of the camera. If you also activate the dynamic zoom, the necessary zoom is slowly reduced in areas where the recording is less shaky.
If the option Automatically determine export settings for each clip is selected, the result of the stabilization is encoded and written with parameters such as codec, bit rate etc. as they exist in the source video. If this option is deactivated, the form for video export in H.264/H.265 format appears. Certain settings such as container type, compression and bit rate are possible here; see also chapter Export in H.264 / H.265 format from p. 301.
All selected video clips are then written with these settings. It does not make sense to influence the resolution, aspect ratio and frame rate here, so these input fields are deactivated.
If a stabilized video has clearly visible compression artifacts compared to the original, please deactivate the option Automatically determine export settings for each clip and click on Standard in the video export form to achieve an appropriate compression.
Note: The compensation of fast camera movements in recordings from cameras with rolling shutter (line-by-line readout of the sensor) can lead to geometric distortions depending on the subject, exposure time and readout interval. Although these are already present in the original video, they become more or less noticeable in the stabilized version in the form of "wobbling" of parts of the image. The remedy for this effect is to record with a shorter shutter time (e.g. also a higher frame rate) or, of course, to record with a camera with a global shutter.
To reverse the playback direction of a video or a selection of several videos, select the corresponding Reverse video option in the wizard.
For technical reasons, this process, like the stabilization of videos, cannot be carried out in real time during playback. Instead, new video clips are derived from the selected video clips, which are rebuilt backwards. The original videos remain unchanged.
Depending on the duration, resolution and frame rate of the video, this process can be very memory-intensive and can lead to errors if the available main memory is too small (e.g. less than 16 GB) and the attempt is made to reverse passages that are too long. As with video stabilization, it is therefore recommended that you first cut out the relevant sections from long videos on the timeline and activate the option Only process the section used on the timeline. By deactivating the option Automatically determine export settings for each clip, the encoding of the result can also be specified individually.
The resulting reversed video clips are saved by default in the Video\reversed subdirectory of the current project and the original clips are immediately replaced by them on the timeline. As m.objects also works non-destructively here, you can simply restore the previous state using the Undo function.
Waiting marks are part of Speaker Support, which is available in the m.objects live, creative and ultimate expansion levels or in the earlier plus and pro license forms. Speaker Support and therefore this assistant cannot be used with m.objects basic.
You will find a detailed description of how to use wait marks in the chapter Live commentary with waiting marks.
This wizard inserts wait marks into the timeline and offers the option of adjusting the display times as well as the fade-in and fade-out times of the images at the wait marks. This wizard saves you a lot of work, especially if you need wait marks at many points in your presentation.
Select one or more images in your presentation and open the wizard via Edit / Wizards / Insert wait marks and adjust timing.
The Standardize wait times option is already selected and set to 0.40 seconds. With this setting, the wizard ensures that the display times of the relevant images immediately before and after the wait mark are kept very short.
You can control the actual duration of the image on the screen or monitor individually using the wait marker. You can read more about using wait marks and Speaker Support in the Speaker Support chapter.
The second option in the wizard offers the possibility of standardizing the fade-in and fade-out times of the images that are provided with wait marks. This option is particularly useful for a longer image sequence and ensures a uniform fade between the individual images.
Here, too, it is of course possible to replace the default value of 2.00 s with your own value.
In addition, the option Group handles and wait marks automatically is selected by default. The wait marks and associated images are thus combined into an event group so that their alignment to each other is always retained, for example when moving objects on the timeline. The grouping can be subsequently removed via Edit / Split event group(s).
At the bottom of the form, you will find the Use auto ducking option, which automatically lowers the volume at wait marks. Use the Ducking settings button to access the special auto ducking settings. You can find a description of this in the chapter Auto ducking at wait marks.
The Autoshow assistant is used to copy objects multiple times within a presentation, whereby you can freely select the number of copies. The assistant can be used on any objects, for example dynamic objects (image fields, zoom objects, rotation and 3D objects), wait marks, single and range marks, sound samples or even entire light curves.
Select the objects you want to copy and select the wizard via the Edit / Wizards / Autoshow, multiple copy of objects menu. Enter the desired number of copies and confirm with OK.
The copies are then 'attached' to the mouse pointer. You can now place them in the desired position by clicking with the mouse. The specified number of copies will be inserted there.
You can find a practical application example for the Autoshow assistant in the Rotation object chapter.
You can arrange image elements in the m.objects canvas easily and precisely using the Guideline Wizard. To do this, right-click in the canvas and select Guides → Guides Wizard in the context menu.
You will find a selection of presets for dividing the screen under Presets. Here you can select a matrix of rows and columns, optionally also in the golden ratio or according to the rule of thirds, or preselect a specific number of rows or columns. You can also make individual entries in the field below. You also have the option of entering values for the distance to the edge of the canvas. As the entries can be made either as a percentage of the canvas width or height or as pixels, you can easily create borders for text positioning or grids with a surrounding border for positioning image content. If required, the wizard can retain or replace existing vertical and/or horizontal guides.
As image fields snap magnetically to the guides depending on the setting, you can use the wizard to create an exact layout for the content of the canvas very easily. In conjunction with the entry in the context menu (right-click in the canvas) Guides → Save guides, you can also save such a layout as a template for further use and call it up again later via Guides → Load guides.
With m.objects you can arrange, animate and present digital stereoscopic images and video sequences on the timeline. You have all the editing options at your disposal that you can also use for two-dimensional presentations. Stereoscopy is part of the m.objects creative and ultimate expansion levels.
While the spatial arrangement of objects in a two-dimensional presentation is ultimately always depicted in two dimensions and therefore the depth cannot be perceived directly, stereoscopy means true spatial perception, i.e. the depiction of three dimensions. To achieve this, a number of requirements must first be met.
Positioning in depth and 3D animations can be carried out within m.objects with monoscopic and stereoscopic images and videos, even mixed together as desired. For the spatial representation of the image content itself, however, the source material must be available in stereoscopic form, i.e. as right and left partial images that have been created with special 3D cameras or special recording techniques and then prepared for output in suitable software. It does not matter whether the partial images are available as separate files, as a Multi Picture Object (MPO) or multistream video, or ready-mounted next to or on top of each other. The output itself takes place via suitable devices such as 3D-capable monitors, TV sets or projectors. Almost all common display technologies are suitable, such as line-by-line polarized displays, various shutter-based devices or mirror boxes (cobox, planar), which consist of two screens and a semi-transparent mirror positioned at the appropriate angle between them. The right or left partial image is displayed on the two screens, while the three-dimensional image is then displayed on the mirror. 3D glasses are required for most forms of stereoscopic display.
With m.objects, stereoscopic image and video material can now be arranged on the image tracks in the usual way and presented with the output devices mentioned. As in the two-dimensional presentation, the software ensures smooth, jerk-free transitions and motion sequences. Above all, however, you can use m.objects to enrich stereoscopic presentations with animations, use mask effects or insert additional graphic elements and texts and adapt them precisely to the spatial orientation of your images.
To be able to insert stereoscopic image and video material into the m.objects image tracks, you must first make a few settings. To do this, right-click in the open m.objects canvas and select the canvas settings in the context menu.
Now click on the Stereoscopy tab.
Place a tick in front of Activate stereoscopic mode. Directly below this, you will find the item Input, which deals with the image material that you use in your show. Next to File name feature, enter the name extensions for the left and right partial image files, if these are available as separate files. Which name extensions you use here is ultimately irrelevant for m.objects; _l and _r have become established as standard.
When using pre-assembled stereo images in a single file, specify in the line below how m.objects should read them in, i.e. how the right and left partial images are arranged in relation to each other. The left partial image can be located to the left of, to the right of, above or below the right partial image.
If the arrangement in individual images differs from this, you will also find this information in the editing window (double-click in the light curve) of the respective image. Here you can now select a different option, which is then only valid for this specific image.
Please note that m.objects only recognizes pre-assembled stereo images (side-by-side or over-and-under) correctly if they are either named with _s or _cs before the file name extension, or if the file name extension is .jps or .mpo. A file with the name Blüte4_s.jpg is therefore automatically interpreted by m.objects in the mode set for the show or, if specified differently, for the respective image.
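The naming conventions above can be summarized in a small sketch (a hypothetical classifier assuming the default `_l`/`_r` suffixes; not part of m.objects):

```python
from pathlib import Path

def stereo_kind(filename: str, left: str = "_l", right: str = "_r") -> str:
    """Classifies a file by the naming conventions described above:
    left/right partial image, pre-assembled stereo image (_s/_cs suffix
    or .mpo container) or monoscopic image. Hypothetical helper."""
    p = Path(filename)
    stem, ext = p.stem, p.suffix.lower()
    if ext == ".mpo":
        return "pre-assembled"
    if stem.endswith(("_s", "_cs")):
        return "pre-assembled"
    if stem.endswith(left):
        return "left partial image"
    if stem.endswith(right):
        return "right partial image"
    return "monoscopic"

print(stereo_kind("Blüte4_s.jpg"))  # → pre-assembled
print(stereo_kind("alps_l.jpg"))    # → left partial image
print(stereo_kind("sunset.jpg"))    # → monoscopic
```

Files without any of these markers fall through to "monoscopic", matching the behavior described later: such images are simply output on both channels.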
Within a show, videos and stereo images with different structures can be used together and mixed with monoscopic material as required.
The settings made in the screen settings under Stereoscopy / Input apply to the entire current production and may therefore differ from show to show.
Next, specify the type of output device(s) you are using in the stereoscopic presets. The option in full screen on two separate outputs is selected here by default. This option applies if you are presenting with two digital projectors or a mirror box (RBT, Cobox, Planar etc.).
When using a mirror box, also click on the option Mirror right image horizontally in full screen mode.
If you notice that the right and left outputs of the graphics card have been swapped when connecting the monitors of a mirror box or two projectors, meaning that the stereo effect no longer works, simply select the Swap left/right assignment option and the display will be correct again.
If you have set up a 'stretched desktop' in order to control two projectors or a mirror box via two outputs of your graphics hardware - possible, for example, with Nvidia hardware under Windows XP or with Matrox DualHead2Go / TripleHead2Go - select the option In full screen side by side. In the case of m.objects, however, using the extended desktop (see chapter Setting up for the presentation) has no disadvantages compared to the stretched desktop with NVidia graphics cards; on the contrary, the handling of the desktop is even more comfortable. It is therefore not worth switching to the older Windows operating system for the stereo display.
For presentation on a 3D monitor (e.g. from Zalman or Fujitsu), select the option Interlaced output (interlaced line by line). The sequence of the left and right partial image may deviate from the standard on some models. If you notice that the spatial depth is displayed incorrectly (inversely) on the m.objects canvas when using a 3D monitor, also select the Swap left/right mapping option in the options, which corrects the sequence of the partial images accordingly and makes the spatial effect visible as desired.
By default, only the left partial image is displayed on the screen if it is not in full screen mode. A stereoscopic display in a reduced window would also make little sense when presenting with a mirror box or with two digital projectors. However, this does not apply to presentation on 3D monitors (interlaced output). In this case, stereoscopic viewing is also possible on a smaller screen, which means that the stereo display is also retained in window mode.
The option In window side by side full width enables stereoscopic viewing in parallel or cross view if the screen is not set to full screen mode. The two partial images are arranged accordingly by setting or removing the checkmark next to Swap left/right assignment.
Some TV sets and digital projectors require an input signal in FullHD resolution, in which the left and right partial images are compressed next to each other or on top of each other. Both partial images are then stretched back to the correct size by the output device and output in interlaced or shutter mode (depending on the model). This reduces the horizontal or vertical resolution by half, although this is practically imperceptible if certain viewing distances are maintained.
When controlling a 3D cinema projector or a projector or TV set with shutter glasses via a single HDMI or DVI port, it is ultimately irrelevant for the resulting overall resolution whether the images are arranged side by side or one above the other in the so-called frame-compatible signal. It is therefore necessary to use either the setting in full screen side by side - half width or in full screen on top of each other - half height and to set the display device accordingly.
When using a line-by-line interlaced 3D TV set (e.g. LG Cinema3D), however, it is important with regard to the overall resolution that the output is in full screen mode, as the display technology in 3D mode can display the full horizontal resolution of 1920 pixels but only half the vertical resolution, i.e. 540 lines per eye. Playing a side-by-side signal (full screen side-by-side - half width) would result in an unnecessary loss of horizontal resolution and therefore a less sharp display.
To achieve maximum sharpness in the display, we also recommend deactivating the display device's overscan function, which is activated ex works on many TVs and leads to a digital zooming of the picture with a loss of image components and sharpness. In the case of LG devices, the option to be selected is called JustScan, in other devices it may be labeled as HDMI overscan: off, display: direct,display: 1:1 or similar. Incidentally, this note applies to both stereoscopic and monoscopic display.
For stereoscopic display on non-3D-capable output devices - i.e. on 2D monitors or 2D individual projectors - m.objects can also display presentations using the anaglyph method. The prerequisite for viewing is, of course, appropriate red/green or red/cyan glasses. For this purpose, you can choose between the Anaglyph colorless and Anaglyph color-optimized options. In the first case, you will receive a 3D display in greyscale; in the second case, the colors of the images are retained as far as is possible for a good separation of the right and left stereo channels.
Once you have made the required entries, confirm the dialog box with OK.
Use the lightbox, the red dot or the File Explorer to insert your images (see also chapter Images in the m.objects show), whereby you only ever place one of the two stereoscopic partial images on the image tracks. The corresponding second image is automatically output simultaneously by m.objects during the presentation. You can either place the right or left partial images on the image tracks or even mix the two, i.e. the right partial image of one subject and the left partial image of another. The software finds the corresponding counterpart using the specified name extension and assigns the partial images to the correct channel. m.objects interprets images without a corresponding name extension as two-dimensional and they are automatically output on both channels.
Stereoscopic video sequences are inserted into the image tracks in the same way. The procedure described below for creating animations also applies to videos. Please note, however, that in stereoscopy each video sequence is played back in two versions (right and left partial video). If you cross-fade between two video sequences, four videos are output simultaneously. The PC must be correspondingly powerful to ensure smooth playback. Even smaller quad-core processors reach their limits when playing several FullHD videos at the same time.
As with stereoscopic images, m.objects can also process mounted stereo videos (side-by-side or over/under). These can be compressed, i.e. left and right partial video in half resolution, or 1:1, i.e. both partial videos in full resolution.
Further information on special functions such as video editing or the use of video codecs can be found in the Video chapter.
The use of m.objects dynamic objects in stereoscopy opens up many exciting possibilities. At the same time, however, the third dimension also brings with it some special features compared to use in 2D presentations that need to be taken into account. You can find out more about the basic handling of dynamic objects in the chapter Dynamic objects.
The 3D object is the tool of choice for dynamic effects in stereoscopy when it comes to positioning in depth (Z axis), usually in conjunction with the image field object. Here are a few examples to illustrate this.
First open a stereoscopic presentation in m.objects or place some stereoscopic images on the image tracks as described above. If necessary, save an already finished show under a new name. Now drag a 3D object from the tool window onto one of the light curves. Double-click on the orange square to open the 3D editing window.
In the bottom left-hand area of this window, you will find the Distance parameter with the default value 100%. If you click with the left mouse button on the orange arrow next to it, hold down the mouse button and drag upwards, you will see that the stereo image becomes smaller and moves backwards. By default, the stereo window is at 100%, so the entire image moves backwards as the numerical values increase. Technically speaking, the distance between the right and left partial images increases, with the right partial image moving to the right and the left to the left. You can observe this clearly if you look at the monitor or screen without 3D glasses when changing the distance value.
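The relationship between the distance value and the horizontal shift of the partial images can be modeled roughly as follows (an illustrative sketch with an assumed maximum shift of 30 pixels; the actual scaling m.objects applies is not documented here):

```python
def parallax_shift(distance_pct: float, max_shift_px: float = 30.0) -> tuple[float, float]:
    """Illustrative model, not the actual m.objects formula: at 100% distance
    the image sits in the plane of the stereo window and the partial images
    coincide. Above 100% the left partial image shifts left and the right one
    shifts right (positive parallax, image appears behind the window); below
    100% the shifts reverse and the image appears in front of the window."""
    offset = (distance_pct - 100.0) / 100.0 * max_shift_px
    return (0.0 - offset, offset)  # (left image shift, right image shift) in pixels

print(parallax_shift(100.0))  # → (0.0, 0.0): image in the stereo window plane
print(parallax_shift(150.0))  # → (-15.0, 15.0): image appears behind the window
print(parallax_shift(50.0))   # → (15.0, -15.0): image appears in front of the window
```

This is exactly what you see without 3D glasses: increasing the distance value visibly pulls the two partial images apart, decreasing it pushes them across each other.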
If you now also drag an image field object onto the light curve and then select the green square on the light curve, you can move the position of the image on the m.objects canvas as required. The mouse pointer becomes a four-way arrow.
If you reduce the value for the distance in the 3D editing window, the image moves closer to the viewer. The two partial images move in the opposite direction. Note, however, that the simultaneous perspective-correct enlargement of the object can violate the stereo window if the image field used is too large.
For comparison, apply the zoom object to another image. Drag a zoom object from the tool window onto the light curve and then double-click on the blue square.
In the following window, reduce the zoom value at the top so that the image on the m.objects canvas becomes smaller. In contrast to the 3D object, the spatial position of the image itself does not change. The entire scene is simply displayed smaller. There is therefore no shift in relation to the stereo window. Such a reduction effect can of course be desired, and with an additional image field object you can also position the image anywhere on the screen. However, the zoom object does not have a real stereoscopic effect.
Of course, this also applies to increasing the zoom value. Here you zoom into the image, i.e. select a section of the image, but the spatial effect remains unchanged. You can also see this from the fact that the relative distance between the left and right partial images always remains the same, regardless of the zoom value. Depending on the effect you want in the stereoscopic presentation, you can therefore use either the zoom object (a flat enlargement or reduction) or the 3D object (a genuine movement towards or away from the viewer).
Another application of the 3D object is when integrating graphic elements into the presentation or when using the m.objects internal title generator. To do this, create a short text in an empty image track above an image: Right-click in the image track, select Insert text element and enter the text in the following window. First set the size and position of the text using an image field object. Then add another 3D object and change the value for the distance. If this is 100%, the text lies on the plane of the illusory window (provided this has not been changed beforehand). If the value is higher, the text moves spatially into the scene; if you reduce the distance, it moves in front of the plane of the illusory window, i.e. spatially out of the picture.
Also change the values for the rotation angle.
If you change the Y value, the text is positioned spatially in the scenery, for example protruding from the front through the illusory window into the background. The other rotation angle values can be used to change the spatial alignment of the text even further. Changing the value for the viewing angle increases or decreases the spatial effects. With a little trial and error, you will quickly get a feel for the effects you can achieve and the limits within which they can be used.
Only a few steps are required to turn the static effects into animations. The principle is that you use two or more dynamic objects in sequence for an animation and change the values there according to the desired animation. For example, if you want to animate a text, place a 3D object with the standard values at the beginning of its light curve and another one with a different distance and Y value at the end of the light curve. The greater the distance between the two objects, the longer the animation will take. The animation can be refined with additional 3D objects or you can insert additional intermediate steps.
In addition to the combination with the 3D object, the image field object also has its own stereoscopic effect: it allows the position of the image on the Z axis to be shifted without changing the image size. This is particularly useful for setting the final displayed size and depth of an object separately without the values influencing each other.
First place an image field object on the light curve of an image and then double-click on it to open the editing window.
You will get the best impression of the stereoscopic effect if you reduce the image field slightly. This change can be made quickly using the sliders with the double arrows: click on an arrow, hold down the left mouse button and drag in one of the arrow directions. Then click and drag to change the value for the stereo plane. You will see that the image moves forwards or backwards on the spatial axis, i.e. the Z axis. The smaller the value for the stereo plane, the closer the image moves to the viewer. You can of course also enter a numerical value in the input field.
In contrast to the 3D object, this procedure does not change the size of the image.
You can create a tracking shot through a stereoscopic image using 3D objects. In a two-dimensional presentation, the zoom object is usually used here, but as described above, it has no stereoscopic effect and is therefore replaced here by the 3D object. Here is a simple example, which you can of course change and expand as you wish:
Drag another 3D object from the tool window onto the start of a light curve.
In the editing window of the 3D object (double-click on the orange square), first reduce the distance by holding down the mouse button and dragging the mouse pointer downwards over the arrow. Alternatively, you can also enter a percentage value in the input field next to the arrow.
Now select a section of the image in this way. On the m.objects canvas, you can follow the change in distance, which is immediately displayed here. The next step is to add an image field object and place it on the light curve exactly under the 3D object. Click on the green square so that a pink frame is displayed on the canvas. Within the frame, the mouse pointer becomes a quadruple arrow, as described, which you can use to position the image section precisely. Select the position so that the left part of the image is displayed.
To pan through the image, simply insert a second image field object at the end of the light curve, in which the right-hand part of the image is displayed.
Test the animation and correct the entered values, if necessary, until the pan is as you want it. If you insert another 3D object above the second image field and further reduce the distance here, the camera pans into the scene at the same time.
With additional 3D objects and image fields, you can, for example, change the panning direction during the camera pan by repositioning the section. The animation between the individual stations is created automatically by m.objects.
In the options for stereoscopy (right-click in the m.objects canvas / Canvas settings / Stereoscopy) you will see a slider with which you can change the stereo base.
This option does not refer to the stereo base of the stereoscopic images and videos themselves. This was already defined during the recording via the distance of the optics and can no longer be changed in m.objects. However, wherever you have used the 3D object or the distance parameter of the image field object and thus changed the distance, for example, a change to the stereo base will have a corresponding effect. This means you can make a targeted adjustment to the stereo base of your images or increase or decrease the spatial effect.
Changing the stereo base in the stereo options has a global effect, i.e. it affects all 3D objects in all image tracks. If you only want to make such an adjustment to a single image or 3D object instead, double-click on the light curve of the corresponding image in the image track. This opens the window for image editing.
In the image editing options, you will see the adjustment of the stereo base first, as soon as you edit a show in stereo mode.
Here too, use the slider to select the desired value. To be able to see the effect on the screen immediately, select the Total image at locator option in the bottom line. Use the Reset button to return to the initial value of the stereo base. The stereo base set in the global stereo options of the canvas is amplified or attenuated using the factor set here.
The use of masks in m.objects offers a lot of creative potential. Especially in stereoscopy, they can be used to create specific spatial effects. General information on masks can be found in the Masks chapter.
An example will illustrate the stereoscopic use of masks in m.objects. To do this, first load or create a presentation with stereo images, whereby you need at least three image tracks, and make the settings for the output device described above. In addition to images, you will also need an image mask, preferably a black rectangle on a white background, which you can create in Photoshop, for example, and save in any file format.
Now place an image in the lower image track and place the black rectangle in each of the two tracks above it. In the image properties (double-click in the light curve), select the option Image blending / overlapping for the rectangles and Image mask, 1 image track for the upper rectangle.
The image in the middle track should serve as a frame. If necessary, enlarge it with a zoom object until it completely fills the canvas size. The image in the top track is the actual mask through which you can see the image. Use an image field object to change the size of the mask. For this example, it should be significantly smaller than the canvas.
Place a 3D object on the light curve of the mask and increase the value for the distance in the editing window (double-click on the orange square). On the canvas, you can see that the image window becomes smaller and at the same time moves backwards. If the distance is reduced, the object moves back towards the viewer; if the value is less than 100%, it moves in front of the display plane and appears to float in front of the m.objects canvas.
Now set the distance back to 100% and change the rotation angle using the orange-blue double arrow.
In this way, the window can be positioned at an angle and tilted at the same time so that it runs more or less through the monitor plane, for example. You can increase this effect by reducing the value for the viewing angle. To adjust the image visible in the window slightly to the window orientation, you can also drag a 3D object onto the light curve of the image and adjust the values for the rotation angle moderately in the options. Excessive changes, i.e. extreme skewing of a stereo image, are not recommended and appear unrealistic, as the viewing angle of the scene shown in the image cannot be changed subsequently.
As described in the section on camera movement, you can create animated motion sequences from static changes by adding further dynamic objects. Of course, this also applies to the use of masks. Exciting animations can also be created here.
A show can be created on the screen and played back directly from m.objects at any time, either in part or in full. Since m.objects relies on real-time rendering techniques wherever possible, there is no significant waiting time or unnecessary loss of quality. Real-time rendering means that the entire processing of image mixing, image dynamics, video integration, sound mixing and sound effects takes place during playback. In m.objects, a sophisticated system of computational load balancing ensures that each of these tasks is prepared in time to be performed at the right moment. With this advanced core, m.objects is able to deliver a constant 60 frames per second in high resolution, even split across multiple digital projectors simultaneously if required.
However, m.objects usually has to share the PC with other programs. The software is quite good-natured with regard to other running processes. However, it is easy to see that a performance-hungry process running at the same time as m.objects can jeopardize precise timing. Even a relatively short but intensive performance peak, such as communication programs for PDAs can demand from the CPU, can lead to disruptions during playback. You should therefore ensure that unnecessary programs such as the task manager, temperature monitor, Internet browser and communication programs are closed before starting playback. The screen saver should also be deactivated. Notebooks should be operated from the mains adapter and the power management for the mains adapter operating mode should be deactivated.
The advantage of presenting directly from m.objects is that the presenter has an optimal overview of the production via the display of the notebook or a screen connected to the PC. Meanwhile, the audience only sees the actual image content via the digital projector. In addition, any formatted help texts can be shown and hidden on the control display at pre-programmed times, which considerably simplifies the live presentation. m.objects ultimate can also control additional peripheral devices such as spotlights, motorized screens, slide projectors and many other devices during playback. Furthermore, dubbing with up to 16 sound channels (8 x stereo) is possible.
With m.objects, you can export the entire presentation with still images, videos and stereo sound as a compact EXE file (presentation file) or as a presentation directory. In addition to the media files, this also contains the playback software itself. This means that a presentation exported in this way can be played back on any PC or notebook with suitable hardware, even if m.objects is not installed on it. The same renderer is used here as in m.objects itself, and this therefore also delivers the same image quality.
In addition, a presentation file can be controlled in a similar way to playback directly from the m.objects timeline. It not only supports individual key assignments for remote controls and keyboards, but also other important functions of Speaker Support for the live presentation: You can also work with wait marks in the EXE file and thus comment on your show freely and for as long as you like at predefined points. In addition, the use of asynchronous sound is also supported, which is used both at wait marks - if required in the loop - and for jumps with index marks. For spontaneous comments, manual ducking to lower the volume is also available in the EXE file. Due to this wide range of functions and the lossless output quality, EXE files are the preferred insurance against technical problems on the presenter's own computer. EXE files are also frequently used in competitions.
The content of a presentation file cannot be changed afterwards, so you should always save the project with the associated mos file, images, videos and sounds. Make changes in the project itself and then create a new EXE file from there if necessary.
To create the presentation file, select File / Export presentation file (*.exe) in the m.objects menu.
Enter a name in the following window. By default, m.objects uses the name of your show (i.e. the name of the mos file). The finished presentation file is saved in the MixDown folder that m.objects created automatically during installation. You can also change this storage location here if required.
You also have the option of automatically recompressing images and sound to suit the requirements. You should select this option if you want to keep the file size small.
To create an EXE file in exactly the same quality as m.objects itself renders, uncheck Compress images and Compress sound. However, JPEG compression with a quality setting of 85 or higher is generally not noticeable in presentations, but requires far less storage space for the EXE file. The same applies to MP3 compression of the audio: at a data rate of 160 to 192 kbps there is no audible loss of quality, while the size of the EXE file is significantly reduced.
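To get a feel for the savings, here is a rough back-of-the-envelope calculation for an assumed 30-minute stereo soundtrack (illustrative figures only, not values measured from an actual m.objects export):

```python
# Rough size comparison for a 30-minute stereo soundtrack (illustrative
# figures, not values produced by m.objects itself).
seconds = 30 * 60

# Uncompressed: 44.1 kHz sample rate, 16 bit (2 bytes), 2 channels
wav_bytes = 44100 * 2 * 2 * seconds

# MP3 at 192 kbps (bits per second / 8 = bytes per second)
mp3_bytes = 192_000 / 8 * seconds

print(f"WAV: {wav_bytes / 1e6:.0f} MB")  # ~318 MB
print(f"MP3: {mp3_bytes / 1e6:.0f} MB")  # ~43 MB
```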
If you only use images and sounds in your m.objects presentation, i.e. no videos, you create a single, compact EXE file. In this case, the option Export to presentation directory with separate video files should not be checked.
A presentation can of course also contain videos. In this case, m.objects creates a presentation directory during export that contains the actual EXE file and, separately, the videos and some system files. In this case, the option Export to presentation directory with separate video files is already preselected and cannot be deactivated. The only exception: If you only use videos in WMV format in your show, you can also deselect this option.
However, the advantage of exporting to a separate directory is that the file size of the EXE file itself remains relatively small. EXE files larger than 2 GB are not started by the operating system, and 2 GB is quickly reached when videos are used. This problem does not arise if the videos are exported separately to a presentation directory. In addition, and this is the decisive advantage, you can use videos in any format.
Please note: To play back the presentation file, you will then need the complete presentation directory with all the files it contains.
Click on the Advanced settings button to enter detailed settings for the presentation file.
Here you will first find the option "Embed assignment profile for buttons / remote control". If you activate this option, all the individual settings that you have specified for the assignment of the keyboard or the buttons of your remote control in m.objects (Settings, Buttons / Remote control) will also be transferred to the EXE file, so that you can control it in the usual way with your PC keyboard or remote control. If this option is not activated, the default settings apply to the EXE file.
Under Display, enter whether the EXE file should start in full-screen mode when called up and on which video output it should open. Full-screen mode is usually the right choice for the presentation, which is why this option is also preselected.
The preferred video output is of interest if several output devices are used, for example a notebook with a connected digital projector. By default, the finished presentation file always starts on the video output on which the canvas was displayed when the file was created. If this output is not available during playback, it starts on the primary screen as defined in the Windows display properties.
Instead, you can preselect a specific video output for the playback of the EXE file in the drop-down menu. Output 1 always corresponds to the primary screen, for example the notebook monitor, while output 2 is, for example, the digital projector. If available, other video outputs can also be selected.
Even after you have started the presentation file, you can still change the video output during playback. This is important, for example, if you are showing your presentation on an external system on which the digital projector is set up as the primary screen while you have preselected output 2 for playback. In this case, press the key combination Ctrl + 1 after starting the presentation so that playback immediately switches to the projector. Similarly, Ctrl + 2 immediately switches the full-screen display to output device no. 2.
You can use call parameters to assign certain properties to an EXE file that has already been completed, including which video output it starts on. You can find out more about this in the EXE file with call parameters chapter.
An EXE file is ideal for passing on a presentation to third parties. However, if you want to prevent your presentation from being distributed uncontrollably or used for an unlimited period of time, m.objects offers effective protection options that restrict the use of your EXE file in various ways or make it subject to certain conditions.
The options for protecting the EXE file can also be found in the advanced settings, where you can choose between Password protection, Expiry date and Playback with license. All three options can also be combined with each other.
If you activate password protection, an input window appears after you confirm the form with OK and save the file, in which you enter the password and repeat it. When the EXE file is called up, this password is requested and the file only starts if it has been entered correctly. The password must not contain any spaces.
Under Expiry date, enter any date using the calendar that appears. Up to this date, the file can still be started and displays a corresponding message beforehand. After this date, playback is no longer possible.
With the Playback with license option, you can enter the dongle ID or - in the case of a basic license - the user name of an m.objects license.
The EXE file can then only be started if the corresponding dongle is plugged into the computer or the basic license mentioned is installed there. You can also enter a list with several dongle IDs or user names, using a new line for each license. In this way, for example, several m.objects users of a photo club can exchange presentations with each other with the certainty that playback is only possible with the listed licenses.
When outputting an EXE file, you have the option of exporting only a specific section instead of the entire timeline - i.e. the beginning to the end of the show. To do this, read the chapter Defining a time window for the export.
An EXE file that has already been completed can be modified using so-called call parameters. The parameters can be used, for example, to delay the start of the presentation or to play it on a specific output device.
To be able to use these functions, you first need a shortcut to the EXE file. To do this, right-click on the EXE icon and select Create shortcut. You can recognize the shortcut by the black arrow on the new icon. Right-click on the shortcut icon again and select Properties.
Under Target, you will see the complete path to the file to which the shortcut refers, i.e. your EXE file. Click in this field and position the cursor at the very end of the path, after the file name and - if present - after the closing quotation mark. Each of the call parameters described below is inserted at this point if required and begins with a space. For example:
"C:\m.objects Data\MixDown\Australia.exe" /loop
You can also use several parameters, which are separated from each other by a space.
The following call parameters are available. It does not matter whether you choose the short or the long spelling:

/d or /delay - 10 seconds delay before the start
/d=30 - 30 seconds delay before the start
/1 to /7 - selection of the video output for playback of the EXE file
/l or /loop - the presentation runs in a loop, i.e. it always starts again from the beginning
/p=password or /pass=password - entering the password; this also allows an EXE file with password protection to be called up automatically from the Autostart folder
If you export an EXE file or a video from m.objects, the entire production will be exported from start to finish. Alternatively, you also have the option of defining only a limited area of the timeline for the export so that only the content within this time window is output as an EXE or video.
When the timeline is active, you will find the Export area object in the tool window. Hold down the left mouse button, drag the export area tool to the desired position on the timeline and release the mouse button. You have now defined the start of the time window. Place another export area tool further along the timeline to define the end of the time window.
A solid line between the two objects now marks the defined export area. Then select the option Export as video or Create presentation file (*.exe) under File to output the desired format. m.objects saves the exported file as usual in the MixDown folder.
If you have defined an export area as described above, you can right-click on one of the two marker objects or in the area between them and use the Cut and select export area command. This cuts the timeline at the section boundaries and selects all timeline objects within the section. You now have the option of deleting this section from the timeline or copying it to the clipboard with [ctrl] [C]. If necessary, you can then use [ctrl][Z] to restore the previous status while the copied section remains in the clipboard. You can then paste it elsewhere in your show.
If you have created several export areas on the time ruler, they are automatically exported as videos with output file names numbered in ascending order. To do this, select the desired option under File → Export as video (e.g. H.264 / H.265 video) and then enter the desired base file name for the videos to be exported. m.objects automatically appends consecutive numbers to this name. The settings for the video export (container, compression) are applied uniformly to all exports.
During this batch processing, m.objects cannot be operated or closed until the export of the last export area has begun.
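The ascending numbering described above can be pictured like this (an illustrative sketch only; the exact file names m.objects generates may differ):

```python
# Illustrative sketch of ascending numbering for multiple export areas
# (the exact naming scheme used by m.objects may differ).
def numbered_outputs(base_name, count, ext="mp4"):
    return [f"{base_name}_{i:02d}.{ext}" for i in range(1, count + 1)]

print(numbered_outputs("Australia", 3))
# ['Australia_01.mp4', 'Australia_02.mp4', 'Australia_03.mp4']
```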
For extensive projects, panorama projections or large installations, outputting the AV show via just one digital projector or monitor is often not enough. For this purpose, m.objects offers the multiscreen function, which enables the use of up to 64 output devices simultaneously.
The prerequisite for using Multiscreen and Softedge is the highest m.objects expansion level ultimate (or the older license form m.objects pro), which is capable of rendering on two output devices by default. The additional Multiscreen / Softedge module is required for each additional output device that is to be used.
There are two different procedures for multiscreen: On the one hand, m.objects offers the option of connecting several projectors or screens and outputting separate content on each of them, from one and the same timeline. In this case, a separate m.objects canvas is used for each output device. The other option is to distribute the content of a single m.objects canvas to several output devices.
If you want to show different content on several output devices with m.objects, you need a separate projection component with its own image tracks and its own virtual canvas.
Click on the gear icon in the program toolbar to open the view for setting up the components.
You have usually already created image tracks in a show and the Projection component is therefore no longer listed in the tool window. Right-click in the tool window, select Create object in the context menu and select the Projection option in the following window. Then confirm with OK. A new projection component now appears here, which you drag into the gray workspace where the other components used to set up the image tracks are already located. If you want to use other output devices, repeat this process as often as necessary. Click on the gear icon again to return to the normal view.
You will now see several projection components, each with their own image tracks and the corresponding canvases. If one or more canvases have not yet been opened, do so now. You can now fill each component with images independently of the others and edit them as usual. You can exchange images between the individual components using the clipboard or the lightbox.
To distribute the canvases to the desired output devices, open the canvas settings of one of the canvases (right-click in the canvas / Canvas settings) and select the Cut and Split tab. You can make all the necessary settings in the Multiscreen setup area.
The Target option is relevant if several networked computers are used to drive a large number of projectors, each with one or more output devices connected.
Local is the computer on which m.objects was started. As long as you control the presentation via only one PC, you do not need to change this setting. Remote 01 to Remote 32 designate additional computers that are connected to the main computer. m.objects can control up to 16 graphics outputs per computer. If necessary, select the computer to which the output device on which the canvas is to be displayed is connected.
Under the Output option, select the output device itself and check the Enable box.
Out 01 is always the primary screen in the Windows display properties. The following outputs (Out 02, Out 03 ...) do not necessarily correspond to the Windows numbering, but this has no influence on the output itself. You can use the flip horiz. and flip vert. options to flip the display horizontally and/or vertically if required. As soon as you confirm with OK, the canvas is displayed on the selected output device, provided it is running in full-screen mode. Repeat this process for all other canvases and then start the locator. You will now see the created image sequence on each output device.
You proceed slightly differently with the second type of multiscreen, where you distribute the content of a single canvas to several screens or projectors. Here you first create your presentation as usual, i.e. you work with just one projection component, the associated image tracks and, accordingly, a single m.objects canvas in the aspect ratio of the overall presentation. To set up the output devices, open Cut and Split in the canvas settings again. Here too, first select the PC used, if necessary, and the desired output device. Do not forget to check the Enable box.
The Viewport option now allows you to output only a section of the entire canvas on the selected screen or projector and to distribute the remaining sections to the other devices. By entering pixel values, you define the exact section of the m.objects canvas - the so-called viewport.
The m.objects canvas forms the coordinate system from which the distribution of the pixels to the projectors or screens results. The zero point of this coordinate system is in the top left-hand corner; the extent of the x-axis (horizontal) and the y-axis (vertical) is determined by the setting for the canvas resolution. For operation with multiple output devices, you should specify the overall resolution of the presentation manually in the canvas settings under Real-time renderer. The Optimize for full screen option is not useful here.
In this example, we have a canvas resolution of 3150 x 1050 pixels. This is now to be distributed to three digital projectors, each of which has a resolution of 1400 x 1050 pixels. The projected partial images should overlap over a width of 250 pixels each and produce seamless transitions by means of softedge blending.
The following sketch shows the distribution of the canvas to the three projectors. The shaded areas on the left and right are outside the m.objects canvas. No image content is displayed there, although these areas lie within the projection area of the left and right projectors.
The viewport entries for the three projectors follow from this layout: each projector is assigned a section 1400 pixels wide, and adjacent sections share a 250-pixel overlap.
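How the horizontal viewport origins follow from canvas width, projector resolution and overlap can be sketched as follows. This is an illustrative calculation, not an m.objects function; it assumes the row of projectors is centered on the canvas, with the outer partial images extending symmetrically beyond its edges as in the sketch above:

```python
# Sketch: derive the horizontal viewport origins for n projectors of
# width proj_w with a fixed overlap, centered on a canvas of width
# canvas_w. Assumes the outer partial images may extend symmetrically
# beyond the canvas edges (illustrative only, not an m.objects API).
def viewport_origins(canvas_w, n, proj_w, overlap):
    step = proj_w - overlap           # horizontal distance between origins
    span = proj_w + (n - 1) * step    # total width covered by all projectors
    offset = (canvas_w - span) // 2   # center the projector row on the canvas
    return [offset + i * step for i in range(n)]

# 3150 px canvas, three 1400 px projectors, 250 px softedge overlap:
print(viewport_origins(3150, 3, 1400, 250))
# [-275, 875, 2025] - a negative origin lies to the left of the canvas
```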
Softedge blending occurs automatically wherever two partial images overlap as a result of the viewport settings. On the corresponding side, each partial image has a gradient towards black that is as wide as the overlap. As the two gradients run in opposite directions, together they again produce the full light intensity; the transition from one partial image to the other therefore has no hard cut edge and is practically invisible, assuming a sufficiently high contrast in the projection.
As a rule, the linear progression of the gray wedge that is specified initially does not yet lead to the optimum result. Depending on the projector type used, you must make the required fine adjustment using the Blending curve button. Click on the button to open the Softedge gamma curve window.
You can see that the curve for the overlap area is initially created in a straight line. Left-click on the line to create curve points with which you can change the curve to the desired shape. Right-click to delete a curve point again. As a rule, inserting a single curve point is completely sufficient for correction. The changes affect both curves of the overlap area (in opposite directions, of course). The curve set up here compensates exactly for the specific gamma distribution of your projectors.
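Why the linear gray wedge usually needs this correction can be illustrated with a simple power-law gamma model (an assumption for illustration; real projector response curves differ):

```python
# Why a linear blend ramp is usually not enough: a projector's light
# output roughly follows signal**gamma (a simplified power-law model,
# assumed here), so two opposing linear ramps do not sum to full
# intensity in the overlap. Pre-correcting each ramp with the inverse
# gamma restores a seamless transition.
gamma = 2.2

def light(signal):
    return signal ** gamma            # simplified projector response

for t in (0.25, 0.5, 0.75):           # positions across the overlap
    linear = light(t) + light(1 - t)  # dips below full intensity (1.0)
    corrected = light(t ** (1 / gamma)) + light((1 - t) ** (1 / gamma))
    print(f"{t}: linear={linear:.2f} corrected={corrected:.2f}")
```

With the inverse-gamma correction, the two ramps sum to full intensity at every point of the overlap, which is what the curve point in the Softedge gamma curve window approximates for the real projector.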
However, as the gamma distribution of some projectors is not completely identical for all primary colors, the distribution can be set separately for the red, green and blue channels. To do this, preselect the respective channel on the right-hand side. If the All setting is selected again under Color channel, the separate gamma curves are discarded. Setting the gamma for separate color channels is advisable if the transition is inconsistent only for images with a certain color scheme.
In other applications, it may be useful to omit certain sections when distributing the m.objects canvas to multiple output devices. This is the case, for example, if you split the content of the canvas across several monitors that are positioned next to each other. The frames and distances between the monitors must be taken into account here. It looks unrealistic if, for example, an object moves from right to left and immediately appears on the next monitor when it 'leaves' one monitor. Instead, leave out the corresponding sections by taking the appropriate gaps into account when entering the pixels in the viewports.
Two more additions on the subject of the viewport:
If, when using multiscreen, you set the m.objects canvas to window mode instead of full-screen mode on one of the output devices, the entire content of the canvas is displayed there, regardless of the viewport settings.
The viewport options are also useful when using a single digital projector. You can move the entire projected image up or down by changing the upper and lower viewport values accordingly. This can be particularly useful if you want to position a projector without lens shift so that it does not block the audience's view.
Even in stereoscopic presentations, the m.objects canvas can be distributed to several output devices and softedge can be used. The procedure does not change at first: The output devices are set up under Section and Split and, if necessary, provided with the required viewport entries. When using projectors, Softedge is also used automatically in the overlapping areas.
The decisive difference lies in the output of the right and left partial images of the stereoscopic presentation. The Right stereo field option is available for this in the multiscreen setup.
Always select this option if you want the right stereo partial image to appear on the selected output device. This option remains deselected for the output of the corresponding left partial image.
A special use case is the projection of an m.objects show onto a curved surface. In the case of large panorama projections, for example, it is possible to project onto a projection surface that is curved inwards.
Here you use the Warp Setup function, which you can also find in the Canvas properties in the Section and Split tab under Targets for full-screen rendering.
Of course, several projectors can be used at the same time, whose projections create a large overall image using softedge (see above). The Warp setup is then carried out separately for each projector.
The Warp Setup option is only available in the ultimate configuration level.
Click on the Warp Setup button to open the Curve Warp Parameters window.
Here you can now see five sliders with which you can change individual parameters for the projection precisely so that an undistorted image is created on the curved surface using a suitable pre-distortion. Changing the parameters with the sliders is transferred to the projection in real time so that you can see the effect immediately and correct it until the image appears correctly on the projection surface.
- H Deg.: Curvature of the screen segment covered by the respective projector in horizontal direction in degrees (default: 0°). This parameter therefore describes the radius of the screen curvature.
- Partition: Number of columns into which the image is divided for perspective correction (default: 32). The higher the number, the more precise the correction, but the greater the computing power required.
- H Ang.: Deviation of the optical axis of the projector from the right angle to the screen in the horizontal direction, in degrees (default: 0°). This is the 'viewing angle' of the projector, i.e. its alignment to the projection surface.
- V Offs.: Vertical offset of the projection image in relation to the projection axis, corresponds to the vertical lens shift of the projector.
- Distance: Throw Ratio of the projector, 100% corresponds to Throw Ratio 1 (default: 100%). This value therefore describes the distance of the projector to the projection surface as a function of its focal length.
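The relationship between the Distance value and the physical setup can be sketched like this (the function and values are illustrative assumptions, not part of m.objects):

```python
def projection_distance(image_width_m, distance_percent):
    # A Distance value of 100% corresponds to a throw ratio of 1:
    # the projection distance equals the projected image width.
    throw_ratio = distance_percent / 100.0
    return image_width_m * throw_ratio

print(projection_distance(4.0, 100))   # 4.0: a 4 m wide image from 4 m away
print(projection_distance(4.0, 150))   # 6.0: a longer-throw lens, 6 m away
```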
Speaker Support is available in all licenses starting with m.objects live. This is a series of functions that make the live presentation of a show considerably more convenient and is used intensively by speakers who regularly stand in front of an audience.
The presentation of an AV show usually alternates between passages with and without commentary. Especially when it comes to going into more detail about a particular image or reactions from the audience, it is difficult to estimate the time required for this in advance. It therefore makes sense to stop the locator at such points during a presentation and only start it again at the end of the interruption.
First of all, this can be done very easily using the keyboard: If you press the space bar during a running m.objects presentation, the locator is put into pause mode, i.e. the show stops. If you press the space bar again, the locator continues to run. This provides you with a simple way of controlling your presentation, which incidentally works in all expansion stages of the software, including the freeware.
However, this control system has a number of disadvantages and is also not very flexible. It requires you to be close to your computer at all times during the presentation, which is certainly not a problem in a small group, but this solution is hardly feasible when the presenter is on stage. In addition, you must always make sure that you do not miss the right moment to stop the locator when using the keyboard. If you press the space bar too late, the locator may already be in the next transition and the following image will be partially visible.
To avoid such unwanted effects, m.objects offers an extremely practical tool in the form of wait marks. You can find the wait marks by clicking with the mouse on the timeline. The associated tools now appear in the tool window, including a pink symbol with a white cross.
Wait marks can be inserted easily and conveniently using the corresponding m.objects assistant. This automatically adjusts the timing. You can find a detailed description of this in the chapter Wizard: Inserting wait marks and adjusting timing.
To set a wait mark manually at a specific position, simply click on this symbol with the left mouse button, hold the button down and drag the symbol to the desired position on the timeline, then release it. A pink cross now appears on the timeline. If you want to change a wait mark later, you can move it on the timeline as required.
Position the locator a little before the wait mark and then start it. As expected, the locator stops at the wait mark.
Note: Wait marks are displayed in the basic license, but since no Speaker Support is available there, they have no effect.
To continue the presentation, you can press the space bar or use a remote control. Only with a remote control do the wait marks show their full strength: the speaker can act freely on stage and does not have to stay near the computer to control the AV show. A press of the corresponding button on the remote control and the show continues, so you can concentrate fully on your presentation.
If you accidentally press a button on the remote control during a presentation, or are not sure whether the presentation has already resumed playback after a wait mark, m.objects offers additional orientation aids that give you security during the presentation: as soon as the locator stops at a wait mark, all time windows (Synchronization, Presentation time and Time) on the user interface flash; since they are part of the program interface, they are not visible to the audience. This means you can tell at a glance whether the locator is at a wait mark or already (or still) in playback mode.
In addition, an optionally switchable indicator shows the pause mode or a wait mark discreetly on the screen, barely noticeable to the viewer. To activate this, open the screen properties (right-click in the open screen / screen settings).
Here you can now select whether the indicator for pause mode and/or for wait marks is displayed. If you remove the checkmark, the respective indicator is deactivated again. You can also select the position of the indicator in one of the canvas corners and set the opacity using the slider.
During the presentation, the indicator slowly fades in as soon as the pause button is pressed or the locator stops at a wait mark. As soon as you resume playback, the indicator slowly fades out again. Thanks to its discreet size and adjustable opacity, it is barely noticeable to viewers.
Tip: Position a wait mark so that the remaining still time of the image after it is as short as possible. Then, as soon as you restart the locator after finishing your commentary, the next transition begins immediately and the presentation continues without interruption. Otherwise, the audience may have to wait until the next image appears on the canvas, which leads to unnecessary pauses during which nothing happens on the screen.
The auto ducking function automatically lowers the volume. This is a very practical function, especially in connection with wait marks and asynchronous sound (see below), which saves many manual steps. A detailed description can be found in the chapter Auto ducking at wait marks.
If the locator stops at a wait mark, this naturally also affects the sound at first, since sound is only played back while the locator is moving over the sound sample. To avoid interrupting the sound abruptly, you can fade it out before the wait mark. On the other hand, music accompanying a commentary can contribute a great deal to the atmosphere of the presentation, and pausing it may even be perceived as annoying.
For this purpose, m.objects offers the option of using sound asynchronously in Speaker Support: Normally, a piece of music in a soundtrack always runs synchronously to the timeline. This allows you to set up fade-ins and fade-outs to match the music. Asynchronous sound, on the other hand, runs independently of the timeline, which in turn means that it ignores wait marks.
The following image shows two audio tracks with mp3 files. The short duration of the sample in the lower track is striking. This sound sample is now to be set up as an asynchronous background sound for the commentary. There is already a wait marker.
Double-click on the sound envelope of the lower sound sample to open the editing window.
In the lower half, you will see a checkbox labeled asynchronous (continue at wait marks and in pause mode). Place a check mark here.
Below this, you will now find further options for the behavior of the asynchronous sound. The option Only fade in at wait marks with auto ducking (otherwise mute) refers to auto ducking; you can read more about this in the chapter Auto ducking at wait marks.
Under Behavior at the end of the clip, you define what happens when the stored sound file has played through to the end: With Repeat, the sound runs in a loop and is repeated until you trigger the wait mark manually or resume playback of the show. The Trigger wait marker option ensures that the wait mark is triggered automatically at the end of the sound playback and playback of the timeline continues. With the No action option, the sound sample remains muted after the end of playback and the locator remains at the wait mark.
After you have selected the desired options, confirm the form with OK.
The change is immediately visible: the dynamic curve now appears against a light blue background. This also lets you see at a glance later which sound samples are set to asynchronous.
Crossfades are now created between the two sound samples, i.e. the upper sample is faded out to match the fade-in of the lower sample and faded in again behind the wait marker. The following image illustrates how the arrangement on the audio tracks looks afterwards.
The actual background music fades out shortly before the wait mark and the music for the commentary fades in at the same time. The locator stops at the wait mark, the image stops, but the music continues to play. When the locator starts again, the asynchronous sound is faded out and the actual background music is faded in again. However, since the asynchronous sound continued to play during the pause independently of the locator, its dynamic curve no longer matches the music being played.
A distinction must therefore be made between the envelope curve, which surrounds the entire range of the sound sample, and the inner dynamic curve of the asynchronous sound. While the dynamic curve no longer corresponds to the sound being played, the envelope curve shows exactly whether and at what intensity the asynchronous sound is output by the software, depending on the position of the locator.
Asynchronous sound runs automatically in a loop. It is therefore repeated as often as required, so that even pauses longer than the sample itself are accompanied by music.
Prompter texts, also known as comments in m.objects, are a useful tool for live presentations. For example, you can use them to display historically relevant data about the lecture as a memory aid or important notes for your presentation. The key point here is that these comments only appear in the program interface and are therefore only visible to the speaker, but not to the audience. Comments can also be a useful tool when programming a multivision.
To insert a comment, first position the locator at the desired point in your show. If you have not already done so, open a comment window via the View menu item. As soon as you click in this window, the prompt Should a new comment object be created? appears. Confirm it with Yes.
m.objects now inserts a comment track in a separate component below the existing tracks. A comment object labeled Text format no. 1 is placed at the desired position on this track. You can now enter a text in the comment window.
During the show, this text appears as soon as the locator reaches the comment object on the comment track. The text then remains visible until another comment follows.
If you want to edit an existing comment later, either click directly on the corresponding object in the comment track or simply click in the comment window.
Depending on the position of the locator, you will be asked whether you want to create a new comment. If you click on No, the locator jumps to the comment displayed in the comment window and you can edit it.
Simply add further comments as described above by positioning the locator and clicking in the comment window. If you want to hide the text in the comment window at a certain point, insert an empty comment there - i.e. without text.
Of course, a comment can also be formatted. Select the relevant text area by dragging over it with the mouse and then right-click. In the context menu that pops up, you will find a range of formatting options, including Font and Background color. Here you can enter the values for the font, font size, font color or the background color of the window, for example, and thus customize your comments.
You can also predefine different text formats that you can later use for different types of comments. To do this, create further objects in the tool window in addition to Text format no. 1, which m.objects created there as a tool with the first comment.
With the comment track active, click on the Create object icon in the toolbar. Now enter a suitable name for the new format, click on the Font button and then enter the desired details for the font and other formatting.
Click OK to create the new tool. If you have created several text formats in this way, select the desired format for a new comment in the tool window so that it is highlighted in black.
As soon as you create a new comment by clicking in the comment window, m.objects will apply the formatting of this tool. If no tool is selected, m.objects will use the first comment tool for the new comment. Alternatively, you can also drag the tool directly into the comment track to create a new comment.
You will also find the Scaling option in the context menu of the comment window (right-click in the window).
If you select this entry, m.objects offers you the option of either setting the displayed size of the comment to a fixed percentage value of your choice or having it scaled automatically.
If you select automatic scaling by content here, the program automatically adjusts the text size to the length of the comment and the size of the window, so that your comments always remain optimally readable.
If, on the other hand, you select the fixed scaling default for your comments, all comments will be displayed in the same font size, regardless of the size of the comment window.
For live presentations in particular, it is a good idea to remove the comments window from its docked position and place it as a separate window above the m.objects desktop. To do this, double-click on the gray bar labeled Comments . The window can now be enlarged as required by dragging the sides with the mouse. In the lecture, the comment window opens as soon as the locator reaches the first comment. With the appropriate scaling, your comments are easy to read even from a distance from the screen.
Another tip for working with detached comment windows: If you insert empty comments here in between, the window disappears from the screen as soon as such an empty comment is reached and is only opened again with the next comment.
In m.objects creative you can create up to three comment tracks, in m.objects pro and ultimate up to four, and thus manage up to three or four comment windows simultaneously. This can be very useful for extensive presentations or complex topics, as it allows comments to be separated by topic, for example. You can customize each of these comment windows individually as described above.
To set up new comment tracks, double-click on the bar under the existing comment track, enter the desired number (maximum 4) in the following window and confirm with OK. The new tracks now appear in the desktop. When creating a new show, the configuration wizard also offers the option of setting up one or more comment tracks.
To create a comment on a new track, drag a comment tool from the tool window onto the track in question. A new window will then open, which is assigned to this track and in which you can now enter your comment. Of course, you can also set up this window as a floating window.
You also have the option of assigning the same window to several comment tracks if, for example, you use a large number of comments and want to provide a better overview of the tracks. To do this, click on the wrench in the toolbar.
You can now see the assignment of the comment windows: the Text window 1 entry, for example, corresponds to the comment window of the first comment track. If you would also like to assign it to the second track, delete the Text window 2 entry there and drag Text window 1 from the tool window onto the second track. Then click on the wrench again to return to the standard view.
m.objects offers you the option of saving the comments from your show - either all or a selection of them - as a text file. This can be helpful, for example, for creating a script from the comments and printing it out, or for processing the comments in another program.
You can find the function for this in the context menu of the comment component. If you want to export all comments, right-click in the empty area of a comment track and select the Export comments (all) option.
To export only certain comments, select them and right-click on one of the selected comments. Now select the Export comments (selection) option.
In the following window, enter a name for the text file. m.objects saves the file in the project directory of your show by default. Alternatively, you can of course select a different directory here. The comments are then saved as a .txt file.
In addition to the actual comments, the file also contains the exact position in time of each comment on the timeline.
In addition to controlling a presentation using a mouse or keyboard, m.objects also offers the much more convenient option of using remote controls. This is particularly useful for presentations, as a remote control allows you to move freely around the room or stage and perform the desired actions at the touch of a button. Especially in combination with the Speaker Support functions described above, the use of a remote control is the method of choice.
The device shown has a range of approx. 20 meters. Recognition takes place automatically under Windows when the supplied USB receiver is plugged in; a separate driver installation is not necessary. After plugging in the receiver, m.objects or the presentation file may need to be restarted, as the presence of the remote control is only queried at startup. The remote control can be used for real-time rendering both from m.objects and from EXE files.
You can use almost any remote control that can be connected to your computer. Above all, however, you have the option of freely assigning the buttons on your remote control. This allows you to set up your own personal m.objects control unit. This also applies to the keyboard, where you can assign specific m.objects functions to any key.
Open the Settings / Buttons / Remote control item in the program menu.
This opens the Define buttons window.
The window shows a complete overview of the assignment of the presentation functions to the buttons on the keyboard or remote control: In the left-hand column you will find the individual functions, in the two columns next to them the assigned buttons. In the default settings, the buttons for the remote control are in the right-hand column and those for the keyboard in the middle column. However, you can change this division as required.
Each assignment that you see here is designed as a button and can therefore be changed directly in this window. Here is an example: the Stop function for pausing the presentation is accessed by default via the Esc key. On a remote control, you will often also find a standard button for this function, labeled here as HID Stop. HID stands for Human Interface Device, which is, somewhat simplified, the technical term for a remote control.
If you now want to assign a different key to the stop function on the keyboard, click on the [Escape] button. The following window appears, prompting you to enter the 'new' key.
Now press the S button (for example). The window that has just opened will disappear and you will see the new assignment in the overview.
You can now stop a running presentation with the S key, after you have of course also confirmed the Define buttons window with OK. To change the assignment on the remote control, proceed in the same way, i.e. click on HID Stop and press the desired button. The new assignment appears immediately in the overview.
The illustration shows a possible change, but may look different depending on the button you have pressed. If you select a button that is already assigned to another function, a corresponding message appears.
If you confirm this message with Yes, the previous use of the button will be replaced by the new function. You should then assign a new button to the previous function if necessary.
If you want to reset an individual button assignment to the default, simply click on the reset button next to it. To reset all values, use the Reset all button in the bottom line.
You also have the option of saving one or more key assignments as profiles. This allows you to save your individual key assignments permanently or create profiles for several users of a presentation PC, so that each of these users has their own key assignments via Load profile.
For key control of the index markers, you will find the Index single digits entry in the overview and Index direct below it. With Index single digits, you can assign the numerical values 0 to 9 to any buttons so that you can then select index 00 to index 99 with two digits. Let's assume you want to assign the numerical value 4 to button A. You select 4 from the drop-down menu, then click on the wide button and press A.
This assigns the value 4 to button A.
In the same way, assign the value 2 to button B, for example. To move the locator to index marker 42, press buttons A and B in quick succession, which m.objects now interprets as 42.
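The two-keystroke logic can be sketched as follows (the key-to-digit mapping below mirrors the example above; in m.objects you define it yourself):

```python
key_to_digit = {"A": 4, "B": 2}   # assignments chosen as in the example

def index_from_keys(first_key, second_key):
    # Two digits pressed in quick succession form one two-digit index.
    return key_to_digit[first_key] * 10 + key_to_digit[second_key]

print(index_from_keys("A", "B"))   # 42: the locator jumps to index marker 42
```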
With the Index direct function, on the other hand, you can assign a button to an individual index marker from 00 to 99 so that you can select the corresponding index with a single button press.
You also have differentiated input options for the button control of the sound output. You can assign any button to the functions louder and quieter as well as ducking (see the following chapter Manual ducking for spontaneous moderation). The special feature here is that you can control either the entire sound output or individual sound cards. If you are working with a PC that has several sound cards, you can assign different sound cards to individual sound tracks of your presentation and control their volume separately at the touch of a button. For example, while the volume of one or more sound tracks is reduced, the other sound tracks remain at the same volume.
It is not always possible to precisely define the moderated areas of a presentation from the outset and set up corresponding waiting marks. For this reason, m.objects offers manual ducking for spontaneous moderation as part of Speaker Support. This allows you to use a freely selectable key on your keyboard or remote control to reduce the volume of the m.objects show by an adjustable value during playback, thus ensuring the intelligibility of your live comments.
As described in the previous chapter Control via remote control, use the Define buttons selection window, which you can access via the menu item Settings / Buttons / Remote control. You will find the Ducking entry at the bottom. Once you have assigned a button, you only need to press it once during the presentation and the sound is attenuated. Pressing the button again returns the sound to its original volume.
In most cases, you can work with the default values without making any further changes. However, you also have the option of adjusting the attenuation of the sound and the times for fading out and fading back in. To do this, select View / Driver Assignment in the program menu, activate the audio tracks with a mouse click and then double-click on the sound card used for the sound output in the tool window.
Here, check the Enable ducking box, enter the desired values for attenuation, fade time and fade-in time and confirm with OK.
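How attenuation and fade time interact can be sketched as a simple envelope. The linear fade shape and the values below are assumptions for illustration; m.objects' actual fade curve may differ:

```python
def duck_level_db(t, fade_time=1.0, attenuation_db=-12.0):
    """Level in dB, t seconds after the ducking key was pressed."""
    if t >= fade_time:
        return attenuation_db                    # fully ducked
    return attenuation_db * (t / fade_time)      # still fading down

print(duck_level_db(0.5))   # -6.0: halfway through the fade
print(duck_level_db(2.0))   # -12.0: fully attenuated until ducking is released
```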
As soon as you have activated manual ducking in the lecture by pressing a button, the audio status window on the m.objects program interface shows a conspicuous pattern that is not visible to the audience. So if you are ever unsure whether ducking is still switched on or not during a live presentation, just take a look at the program interface.
You can also define any points on the timeline at which a possible volume reduction by ducking is always reset. So if you forget to turn the volume up again after a spontaneous comment, m.objects will automatically do this for you at the desired point. All you have to do is insert a single marker in the time ruler, tick the Reset ducking of all sound cards entry in the properties window and confirm with OK. As soon as the locator passes this point in the lecture, the volume will be adjusted back to the default value if required.
With the speaker preview, you always have the course of your presentation in view during the live lecture. Like the commentary window, the speaker preview only appears on the m.objects program interface and is therefore only visible to the speaker, but not to the audience. The speaker preview can also be integrated into the m.objects desktop in docked mode, as well as positioned and scaled as required on the screen in window mode. This means that you can easily see the preview even from a greater distance from the screen.
First open the speaker preview via the View program menu. Right-click in the new window to open a list with the different view options.
Of particular interest here is the option of having the complete live image from the screen run in the speaker preview window. This allows you to turn your attention to the audience during the presentation and still keep an eye on what is happening on the screen without having to turn to the screen. To do this, select the option Live image only (screen) or Live image + next image thumbnail. In the second case, the next image on the image tracks appears below or next to the live image (depending on the orientation of the window) so that you can also keep an eye on the progress of the presentation.
Please note that displaying the live image in the speaker preview requires a certain amount of additional graphics performance. Make sure that your hardware offers sufficient performance in full screen mode of the screen (in the extended desktop under Windows) for a smooth process.
If you select the current + next image thumbnail in the view options instead, the speaker preview will show you the current image of the presentation on which the locator is currently located and which your audience can see, as well as the following image during your presentation.
Alternatively, you also have the option of displaying only the current image thumbnail or only the next image thumbnail in the speaker preview window, i.e. only the current or the next image of the presentation.
For a better overview, the current image or the live image is marked with a light red frame in the speaker preview, while the following image is displayed in a light blue frame.
You also have the option of excluding individual images from the display in the speaker preview. This can be useful, for example, for texts and titles that appear simultaneously with an image in the canvas or for complex picture-in-picture compositions in which images are superimposed on several image tracks. Here you can specify which of these images should appear in the speaker preview and which should not. Of course, this function does not affect the display of the live image, in which the complete screen image is always displayed.
You will find a square with a dot in the top left-hand corner of the light curve thumbnail.
This indicates that the relevant image is displayed in the speaker preview.
Click on the dot to remove it, confirming the window that follows with Yes. The image will then no longer appear in the speaker preview. You can add it back to the display in the same way. You can also select several images beforehand and use multi-edit to apply a change to the speaker preview display to all selected images.
By default, display in the speaker preview is initially enabled for all objects that you insert into the image tracks. To change this default, double-click on the Standard tool in the tool window (with the image tracks active) and remove the checkmark next to Speaker preview (proxy image display).
The Synchronization window shows you the exact position of the locator in the show, to the millisecond. If the window is not open, right-click on the timeline and select the Show status window option in the context menu.
However, the synchronization window offers even more options. Double-click in the window to open the Enter timecode form. Here you have the option of navigating to a specific time position on the timeline. To do this, enter the desired time value in the fields for hours, minutes, seconds and milliseconds and confirm with OK. m.objects will then position the locator exactly at the specified position.
Alternatively, check the Countdown mode option in the Enter timecode form and then confirm with OK. As soon as you switch to pause or play mode, m.objects displays the time remaining until the last object on the timeline. This mode is clearly indicated by a yellow background. In this way, you can decide, for example, how much time is left for live comments during a lecture if you want the timeline to run to the end.
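The countdown value is simply the distance between the locator and the end of the last object on the timeline, as a small sketch shows (the times are assumed examples):

```python
from datetime import timedelta

last_object_end = timedelta(minutes=32, seconds=15)   # end of the timeline
locator = timedelta(minutes=12, seconds=40)           # current locator position

remaining = last_object_end - locator
print(remaining)   # 0:19:35, shown instead of the absolute locator position
```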
To switch back to the original view, double-click in the synchronization window and uncheck Countdown mode.
The time factor also plays an important role in live presentations. Here, m.objects offers two practical tools to help you keep an eye on the time: the presentation time and time of day displays. Both can be found in the View menu, can be docked in the workspace or shown as free-floating windows, and can be scaled as required.
The presentation time shows you the continuous duration of your presentation. As soon as you start the locator, the timer starts counting. In the default setting, the time is counted up to the second starting at 00:00:00.
Double-click in the window to enter a desired presentation duration, for example 30 minutes.
During the lecture, the window shows you the elapsed time. As soon as the specified duration is reached, the font color changes from light green to red.
If you check the Countdown mode box in the input window for the presentation duration, the specified time is counted down.
In contrast to standard mode, the time is displayed in blue. Here too, the color changes to red as soon as 00:00:00 is reached, and from then on the exceeded time is displayed.
The display for the presentation time naturally continues to run even if you switch the locator to pause mode or it stops at a wait mark. Only when you stop the presentation, i.e. by pressing the Esc key or clicking on Stop, is the presentation time reset to 00:00:00 or, in countdown mode, to the specified duration.
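The behavior of the presentation-time display described above can be modeled in a few lines. The following Python sketch is purely illustrative (m.objects exposes no scripting interface); the function name and color values are assumptions based on this description:

```python
# Illustrative model of the presentation-time display (not m.objects code).

def presentation_time_display(elapsed_s, planned_s, countdown=False):
    """Return (display_string, color) for a given elapsed time in seconds."""
    overrun = elapsed_s >= planned_s
    if countdown:
        remaining = planned_s - elapsed_s
        shown = abs(remaining)          # after 00:00:00 the exceeded time is shown
        color = "red" if overrun else "blue"
    else:
        shown = elapsed_s               # standard mode counts up from 00:00:00
        color = "red" if overrun else "light green"
    h, rest = divmod(shown, 3600)
    m, s = divmod(rest, 60)
    return f"{h:02d}:{m:02d}:{s:02d}", color
```

For a planned duration of 30 minutes, the display stays green while counting up and turns red once 00:30:00 is exceeded; in countdown mode it counts down in blue and shows the exceeded time in red.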
You can also reset the presentation time at pre-selected points on the timeline. This is particularly useful if you first run a lead-in in the loop and the actual presentation only starts afterwards. The presentation time should of course only be started from this point.
To do this, insert a single marker in the time ruler at the appropriate point and check the Reset lecture time box in the properties window.
The Time window first shows you the current time of day.
Here too, double-clicking in the view opens an options window in which you can enter a scheduled end time for the lecture. If you want the lecture to end at 15:30, enter this time accordingly.
At 15:30, the font color in the Time window changes from grey to red, indicating that the scheduled end of the lecture has been reached.
If you have selected countdown mode, the remaining time until the specified end time is counted down. The color changes to red as soon as this time is reached. The elapsed time is then also displayed here.
The interactivity functions are available in all licenses starting with m.objects live.
To play an m.objects presentation, you can use the standard play, pause and stop functions, either via the keyboard, mouse or remote control. You also learned about the use of wait marks as a means of controlling the m.objects timeline in the chapter on Speaker Support. This chapter now deals with two further m.objects tools with which you can influence the flow of a presentation and create interactive applications:
Index/jump markers and range markers allow you to directly select freely definable locations in an m.objects presentation and to play freely definable areas as often as you like. In conjunction with interactive image fields, you can even call up individual presentations from a menu by clicking on the canvas. These functions are explained in detail using an example.
If you click on the timeline in the program's desktop, the associated objects are displayed in the tool window. A presentation must be open for this.
Here you can now see - among other tools - the range marker with a blue square and a black arrow in front of it as well as the index/jump marker, symbolized by a grey square with the letter i.
Hold down the left mouse button, drag the symbol for the index/jump marker to any position on the timeline and release the mouse button.
An index marker with the number 01 now appears on the timeline. If there are already other index markers on the timeline, the number may be correspondingly higher. The properties window opens or is opened by double-clicking on the symbol.
You can initially assign a name here. Although this is not absolutely necessary, it can be very helpful if you work with a large number of index/jump markers later on. The Jump to index selection field now offers you at least the two options No jump and 00 - Start. The 00 marker is automatically placed at the very beginning of the timeline as soon as you use index/jump markers. In contrast to all other markers, it is permanently positioned and cannot be edited further.
Now enter the selection 00 - Start, confirm with OK and then start the locator before the new index/jump marker. As soon as it reaches it, it jumps back to the beginning of your presentation. If you place the marker at the end after the last image or sound sample, the presentation will run in a loop. It will then start again and again from the beginning until you stop the locator.
The fade time when jumping to an index/jump marker can also be changed. In the marker properties, you will find the Fade time on entry option.
Enter the desired value here. This determines how long the fade lasts when the locator jumps to the selected marker. The default value is 1 second.
You can easily create additional index/jump markers by dragging the symbol from the tool window to the desired position on the timeline, as described above. These markers are then also available for selection under Jump to index in the properties window. If you have assigned names, these will also be displayed here to make selection easier, otherwise only the index number will appear.
You can also select each index/jump marker directly while the presentation is running and make the locator jump to it by entering the corresponding number on the numeric keypad of the keyboard, for example 02, 06, 12, etc. Make sure that the numeric keypad is activated. If this is not the case, press the [num] key.
You can use index markers to provide structure and an overview, especially for extensive presentations. All index markers are displayed in the index list, which you open via View / Index list.
Here you will find the respective start time, the description (the name you have given to the index marker) and the duration of the individual chapters, i.e. the time interval between the individual index markers. If you have defined a jump target, this will also be displayed here.
As an alternative to entering the two-digit index number using the keyboard, you can navigate to the corresponding index on the timeline by clicking on it in the index list. This allows you to maintain an overview even with large productions and reach individual areas of the show quickly and easily.
This function is available in m.objects ultimate or in m.objects licenses with the Remote-In module.
You can use this to set a real-time default (time and/or day of the week or date) for the execution of index marks. When this time specification is reached in Pauseor Play mode, the locator automatically jumps to the relevant index marker. This allows you to set a one-off execution at a very specific time as well as constantly recurring triggers (every minute, hour, day, week, month or year) - for example, for an installation in which a specific part of a presentation is to be started daily at a fixed time.
To use this function, check the option Time-controlled entry in the properties window of the relevant index marker. If necessary, activate the Start playback if in pause mode option if the presentation is to wait for execution in pause mode.
First enter the desired time in the properties window. If no other options are selected, the seconds are used as the time for execution. For example, if the entry 10:20:00 is specified, the index marker is executed every full minute (i.e. when the second value 00 is reached).
If you now activate the values for minute or hour under Include, you can further limit the execution accordingly. It then takes place every 60 minutes (always 20 minutes after a full hour) or once a day at 10:20.
With the other options, you can now specify whether the execution should take place once on a specific date, once a year, once a month or on a specific day of the week.
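The matching logic behind these Include options can be sketched as follows: the seconds are always compared, and each additional field you include makes the trigger rarer. This is a conceptual model, not m.objects code; all names are hypothetical:

```python
from datetime import datetime

# Hypothetical sketch of the time-controlled trigger matching described above.

def should_trigger(now, trigger, include_minute=False, include_hour=False,
                   weekday=None, date=None):
    """Decide whether an index marker fires at datetime `now`."""
    if now.second != trigger.second:
        return False                    # seconds always compared -> fires every minute
    if include_minute and now.minute != trigger.minute:
        return False                    # -> at most once per hour
    if include_hour and now.hour != trigger.hour:
        return False                    # -> at most once per day
    if weekday is not None and now.weekday() != weekday:
        return False                    # restrict to a specific day of the week
    if date is not None and (now.month, now.day) != date:
        return False                    # restrict to a specific date (once a year)
    return True
```

With a trigger time of 10:20:00 and no fields included, the marker fires whenever the second value 00 is reached; including the minute limits it to 20 minutes past each hour, and including the hour as well limits it to once a day at 10:20.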
As the name suggests, you use range markers to control a specific area within your m.objects presentation. In contrast to index/jump markers, you always set a start and end point.
To do this, drag a range marker from the tool window (with the timeline activated) onto the timeline with the left mouse button pressed and release the mouse button. A range marker symbol is now placed on the timeline and a blue line extends from here to the right.
Now place a second range marker a little to the right of the first one. The line now connects the two range markers.
The area marked by the line can now be repeated as often as required. In contrast to index/jump markers, you have the option here of specifying a certain number of passes. To do this, double-click on one of the range markers, which opens the Range properties window. It does not matter which of the range markers you double-click on, as the values you enter here are automatically transferred to the other marker.
First check the Repeat range box and confirm with OK. The blue line between the range markers then turns green. If you now start the locator, it will run in a continuous loop between the two markers until you stop it again. Double-click on a range marker again and place another tick next to Total runs. Next to it, enter any numerical value and confirm again with OK.
The selected number is now displayed at the first range marker and the locator repeats the range the corresponding number of times during playback.
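Conceptually, the locator's path through a repeated range looks like this. The sketch below is a hypothetical illustration of the playback order, not actual m.objects code:

```python
# Illustrative model of range-marker playback: the locator plays up to the
# end marker, jumps back for each repeat, then continues to the end of the show.

def playback_segments(range_start, range_end, total_runs, show_end):
    """Return the (from, to) time segments the locator plays, in order."""
    segments = [(0.0, range_end)]               # first pass reaches the end marker
    for _ in range(total_runs - 1):             # each repeat jumps back to the start marker
        segments.append((range_start, range_end))
    segments.append((range_end, show_end))      # then playback continues normally
    return segments
```

For example, with markers at 10 s and 20 s, 3 total runs and a 60-second show, the locator plays 0-20, 10-20, 10-20, and then 20-60.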
Similar to the index/jump markers, you can also change the fade time in the range properties, i.e. the duration of the fade from the second back to the first range marker.
You can of course create several such areas on the timeline. For more complex applications, combine range markers with index/jump markers as described in the following section.
Things get really exciting when you combine index/jump markers and range markers, add interactive image fields and underlay them with asynchronous sound. This sounds more complicated than it is and, above all, opens up a wide range of possibilities for creating interactive applications.
An example will demonstrate how you can create such an application. The following screenshot shows an overview of the arrangement on the timeline.
A title is initially stored in each of the top three image tracks. All three titles together form a selection menu that is repeated with the help of area markers until the viewer clicks to switch to a short presentation on the topic of "Mexico", "Hawaii" or "Munich". The short presentations follow behind on the timeline. The selection menu is underlaid with images in the fourth and fifth image tracks. Asynchronous sound ensures that background music can be heard without interruption.
If the viewer clicks on one of the titles - similar to a hyperlink on a website - the locator jumps to the corresponding index marker. It fades seamlessly into the short presentation, and the transition between the pieces of music also takes place in a fade. At the end of the short presentation, the locator fades back to the selection menu. If the locator is in the middle of one of the three short shows, the direct route back to the menu is via a button that is displayed at the bottom of the screen. Two further buttons make it possible to navigate forwards or backwards frame by frame. These buttons are also arranged as text in the top three image tracks and are positioned behind the selection menu.
The texts in the selection menu at the front are provided with passe-partouts that take on different colors depending on the background image. They also each contain a shadow/glow object. The index/jump markers and range markers on the timeline are particularly important for understanding the application.
In the selection menu area, there is the index/jump marker 01 - Menu and two range markers behind it, which are connected by a green line. After starting, the locator first runs through to the second range marker and then jumps back to the first range marker. The Repeat range option is set in the properties of both range markers.
Similarly, there is an index/jump marker at the beginning and end of each of the three short presentations. The locator jumps to the marker at the beginning - for example 02 - Mexico - as soon as the corresponding 'link' on the screen is clicked. From the index/jump marker at the end, the locator jumps back to the selection menu. These functions are set in the properties of the index/jump markers.
The linking, i.e. the option to select a presentation with a mouse click, takes place via interactive image fields.
The light curve of each text in the selection menu contains an image field object. The interactivity button is located at the bottom left of the properties window. Click on it to open the Interactive image field window, where you can specify which index/jump marker the mouse click should lead to. Under Jump to index, all index/jump markers on the timeline are available for selection.
There are also options here to display the mouse pointer as a hand in the area of the interactive image field and/or to position the mouse pointer immediately on this image field when it is displayed. The buttons that can be seen during the short presentations are also created in this way. The buttons for Image forward and Image back do not refer to an index marker, but are assigned to Image forward and Image back in the properties for Interactive image field.
Interactive image fields are of course also suitable for use with touchscreens, where the presentation can be selected simply by touching the desired button. This makes such applications particularly interesting for exhibitions, museums, events and many other occasions where the audience can actively participate.
In order to achieve a clean transition not only between the image sequences but also between the sound samples in this example, a combination of asynchronous and synchronous sound is used here. The sound sample that can be heard during the selection menu naturally runs asynchronously, so that the sound continues to run at the range markers or between the index/jump marker and range marker.
When jumping from the selection menu to the short presentations, the asynchronous sound fades to the synchronous sound. A simple but effective trick is used here: the asynchronous sound extends across all individual parts of the presentation.
Only its envelope curve is lowered in the meantime and rises again at the corresponding points. This means that the asynchronous sound is faded out when jumping from the selection menu to a short presentation and remains inaudible in the area in between on the timeline. This creates a clean transition without disruptive interruptions. You can find more information on asynchronous sound in the Speaker Support chapter.
Interactive image fields can also provide visual feedback - comparable to hyperlinks on websites - if the mouse pointer is positioned over them when they are clicked on or touched on a touch panel.
To do this, place the dynamic object shadow/glow, passe-partout or image/video processing on the light curve in which the interactive image field is located. Then set the desired effect in the properties of the dynamic object.
In the example from the previous chapter, the visual feedback is provided by the passe-partout and shadow/glow objects on the light curves of the titles "Mexico", "Hawaii" and "Munich" in the selection menu. Each of these titles is provided with several passe-partout objects so that the background color of the titles changes when the background images change. This ensures optimum readability.
The opacity is set to 70% in the properties of the passe-partout objects. If the mouse pointer is now positioned over the sensitive area of the image field (or if it is clicked or touched on a touch panel), the value defined in this dynamic object is increased so that the user receives immediate visual feedback about the planned or triggered action. In this case, the opacity of the button background is significantly increased, as can be seen in the following screenshot of the "Hawaii" button.
At the same time, the shadow/glow effect is also significantly enhanced.
A detailed description of working with the dynamic objects mentioned can be found in the chapter Dynamic objects.
Interactive image fields can control more than just index markers. There is also a whole range of other functions that you can trigger with them. These can likewise be found in the properties window of the image field after clicking on the interactivity button:
- Image forward/backward
- 10 s forward/backward
- Index forward/backward
- Playlist forward/backward
- Wait mark forward/backward
- Pause/Play
- Continue
- Exit
With these functions, for example, individual control bars can be created and permanently integrated into the presentation so that the viewer can later interrupt/resume playback or repeat or skip parts of the presentation.
All control functions of interactive image fields work both when playing back a show from the timeline and when playing back using a presentation (EXE) file.
Interactive image fields can also be stored with a live zoom factor instead of the control functions.
With this function, the viewer can enlarge images with a soft animation by clicking the mouse or touching the screen, for example to make small fonts or image details easier to recognize.
To do this, place an image field object in the light curve of the image to be enlarged. In the properties window, click on Interactivity and select the Live zoom option in the following window. Enter the desired magnification here. Then confirm twice with OK.
The magnification is then centered on the position of the image that the viewer selects with a mouse click or touch. In enlarged mode, the image section can also be subsequently moved. Clicking or touching again reduces the image again and returns it to its original position.
It is irrelevant for the function whether m.objects or the EXE file is in pause or playback mode or is currently on a wait marker.
A multivision can be exported from m.objects as a video in various formats by selecting the Export as video sub-item in the File program menu.
The options available here are H.264 / H.265 video, Windows Media Video, AVI video, MPEG-2 video and single frame sequence. Select the desired format with a mouse click. This opens a window whose general area is identical for all options.
The H.264 / H.265, WMV, AVI and single frame sequence formats are also suitable for exporting stereo 3D productions.
Here you can specify the values for frame rate, resolution and aspect ratio.
Special presets are available for exporting in H.264 / H.265 format, which do not necessarily require manual input. A more detailed description can be found further down in the text.
The appropriate value for the frame rate depends on the application for which you are creating the video or the output device on which you are running the video. The abbreviation fps stands for frames per second.
A value of 60 fps is particularly suitable for playback on computer systems. On a powerful PC, videos with 60 fps result in completely smooth playback. For most less powerful PC systems, a value of 30 fps is the right choice for a good compromise between smooth transitions and motion sequences and the required computing power. 50 fps corresponds to the standard for European TV sets, but many of the integrated media players also reproduce 30 fps without any problems. For other output devices, information about the preferred frame rate should be included in the operating instructions.
The resolution and aspect ratio also depend on the desired use of the exported video. For output on a TV set with Ultra HD resolution, the values 3,840 x 2,160 should therefore be entered here for the resolution and 16:9 for the aspect ratio.
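A quick way to check whether a resolution matches the intended aspect ratio is to reduce the pixel dimensions by their greatest common divisor. A small illustrative Python snippet (not part of m.objects):

```python
from math import gcd

# Reduce a pixel resolution to its simplest aspect ratio, e.g. to verify
# that the Ultra HD values mentioned above really correspond to 16:9.

def aspect_ratio(width, height):
    d = gcd(width, height)
    return width // d, height // d

print(aspect_ratio(3840, 2160))   # -> (16, 9), i.e. Ultra HD is 16:9
print(aspect_ratio(1920, 1080))   # -> (16, 9), Full HD as well
```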
When exporting as MPEG-2 video, the values for frame rate and resolution are already fixed, as in this case the video is always exported in PAL resolution.
With the global zoom, the actual video content can be reduced in size if required so that a black border is included around the video. This option is only used in cases where the output device used for playback cuts off the outer edge of the image due to an incorrect setting (overscan). Normally, the 100% setting is recommended.
The other areas of the export window offer different options depending on the selected video format.
The H.264 or H.265 video formats are the right choice for many applications, as they are very versatile and deliver very good output quality. You can use these formats on all possible operating systems, so in addition to Windows, you can also use them on macOS, Android or iOS, for example, and therefore also for mobile devices such as tablets or smartphones. Most newer TV sets can also read these video formats directly from a USB data stick, for example.
In the export settings for the H.264 / H.265 format, you will find the aforementioned Preset option at the top. Click on the selection box to open a list in which you will find numerous presets for different purposes. There are presets for mobile devices such as iPhone, iPad or Android devices or for online videos on YouTube or Vimeo, and of course suitable presets for different resolutions and frame rates up to Ultra HD with 60 fps. If you select such a preset, all other details in the export form are already set accordingly so that you only have to confirm with OK. However, you can also - starting from a preset - adjust the other values individually as required.
Under Container type, specify the file format in which the video is to be saved. As a rule, use MP4 here. If the output device prefers a different format, select MKV or MOV instead.
Under Compression, you can choose between H.264 and H.265 (HEVC). H.265 is significantly more efficient and produces approx. 40% smaller files with comparable quality, or correspondingly higher quality with the same file size. The prerequisite is that the output device supports H.265; you can find the relevant information in the manufacturer's documentation. If H.265 is not supported, select H.264 here.
If the bit rate can be set in the export settings - this applies to H.264 / H.265 and WMV - first decide whether the exported video should have a constant or variable bit rate.
The constant bit rate (CBR) is suitable for performance-reduced systems or connections, e.g. for less powerful computer systems or for playback via slower networks with low bandwidth. If this is not the case, select the variable bit rate (VBR) here instead.
Below this you will find a slider with which you can set the bit rate (for CBR) or the quality (for VBR), which in both cases has an effect on the output quality of the video: the higher the value, the higher the quality and, of course, the file size. Values between the high and extreme markers result in only a slightly perceptible increase in quality, while the file size increases significantly. You can also enter the value numerically in the input field. In most cases, the default value is recommended, which you can return to at any time by clicking on the corresponding Default button.
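For a constant bit rate, the resulting file size can be estimated directly from bit rate and duration. The following sketch is a back-of-the-envelope calculation; the 16 Mbit/s value is an example, not an m.objects default:

```python
# Rough CBR file-size estimate: megabits per second times seconds,
# divided by 8 to convert bits to bytes.

def estimated_size_mb(bitrate_mbps, duration_s):
    return bitrate_mbps * duration_s / 8

# A 20-minute show exported at an example rate of 16 Mbit/s:
print(round(estimated_size_mb(16, 20 * 60)))   # -> 2400 (MB)
```

This also makes the H.265 advantage mentioned above tangible: at comparable quality, roughly 40% smaller files would bring the same show down to about 1.4 GB.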
Especially when exporting in the H.264 / H.265 formats, there is the option of significantly accelerating the rendering of the video with the support of a suitable graphics card. Compared to exporting the video without such hardware support, the export is up to ten times faster. In this context, it is important that an up-to-date graphics driver is installed. You can download the latest driver from the respective manufacturer's website.
At the bottom of the export form you will find the Hardware support option. Select the entry that matches your computer's graphics card here.
You can choose between AMD graphics hardware, Intel graphics hardware and NVidia graphics hardware. If the subsequent export of the video fails despite a current graphics driver (see above), either an incorrect entry has been selected - in this case, correct your entry when exporting again - or the graphics card is not suitable for hardware support. In the latter case, select one of the two top options CPU (fast, lower quality) or CPU (slow, higher quality) under Hardware support. These two options generally work on all PCs.
Especially with AMD graphics cards, it has been shown (as of the end of 2019) that exporting in H.264 format often results in reduced output quality. For computers with AMD graphics hardware, we therefore recommend exporting in H.265 format if possible or selecting the CPU (fast ...) or CPU (slow ...) option.
The AVI format is suitable for further processing of the video with an editing program or another application and for uncompressed output. To do this, first select the corresponding option under Export as video, then enter the frame rate, resolution and aspect ratio and, after confirmation, the Mix video file window appears.
Once you have confirmed this after entering a name for the file, you can select the desired codecs for video and audio compression in the following dialog or select the uncompressed option. Please note that in the latter case, extremely large files may result.
Which codecs are available here and which you can select depends on the software environment on your computer system and on the subsequent use of the video. After this selection, the video is created by the video generator.
Further information on video codecs can be found in the Video chapter.
If you select Export as single image sequence, m.objects creates a sequence of individual full images in JPEG, BMP, TIFF, PNG or JPEG 2000 format. Use the Frame rate option at the top of the window to specify how many frames are created for one second in the timeline. This type of video is not intended for direct playback in a player, but rather for further processing in another program. The lossless TIFF format guarantees high output quality, which is why professional video editing and post-production programs usually have a corresponding import option. For the JPEG and JPEG 2000 formats, you can also set the compression quality using the slider at the bottom.
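The number of files such an export produces grows quickly with frame rate and duration. An illustrative estimate (the per-frame size is an assumed example value; actual TIFF sizes depend on resolution and bit depth):

```python
# Estimate the frame count and disk usage of a single-image-sequence export.
# The per-frame size is an assumption for illustration, not a fixed value.

def sequence_stats(duration_s, fps, mb_per_frame):
    frames = duration_s * fps
    return frames, frames * mb_per_frame

# A 5-minute show at 30 fps, assuming roughly 20 MB per uncompressed TIFF frame:
frames, size_mb = sequence_stats(duration_s=300, fps=30, mb_per_frame=20)
print(frames)     # -> 9000 individual image files
print(size_mb)    # -> 180000 MB, i.e. on the order of 175 GB
```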
You should only use the MPEG-2 export option for videos that you then want to burn to DVD using suitable authoring software and play on a DVD player. The export takes place here in PAL resolution, i.e. with 720 x 576 pixels. Due to these limitations, the quality of such an MPEG-2 video is nowhere near the output quality directly from the m.objects timeline.
The Use anti-flicker filter option is intended for playback with older TV sets, where flickering effects may occur in sharp areas of the picture. You do not usually need this option for playback on newer TV sets. Using the anti-flicker filter slows down the video creation process and leads to a certain, in this case deliberate, loss of vertical resolution and therefore also of sharpness.
For the creation of higher-quality DVDs, we recommend outputting a high-resolution H.264 or H.265 video instead, which is further processed in suitable authoring software.
Once you have made all the entries and confirmed the video export window with OK, the Mix video file window appears. Enter a name for the video here and confirm with Save.
Only in the case of an AVI video will the data compression window already described under Export as AVI video follow, which must also be confirmed with OK.
You can now follow the creation of the video on the screen. Once the video generator has started its work, it works completely independently, so you can use the PC for other tasks while the video is being created, including continuing your work with m.objects.
m.objects saves the exported videos in the MixDown folder by default, unless you specify a different path when saving.
When exporting a video from m.objects, you have the option of exporting only a specific section instead of the entire timeline - i.e. the beginning to the end of the show - regardless of the video format. Read the chapter Defining export areas for more information.
You can save particularly computationally intensive arrangements on the image tracks of your project separately as a video in order to subsequently insert the newly created video into your presentation and thus reduce the load on the computer's processor or graphics card. This procedure is ideal, for example, for an extremely fast image sequence that you want to use as a timelapse.
The short sequence of individual images would overwhelm any computer when played back in real time, even with a very powerful graphics card. The finished video, on the other hand, has only moderate requirements. Similarly, an extremely high-resolution video can be exported as a video with a lower resolution and therefore lower performance requirements and replaced with the new video.
First select the objects that you would like to export as a separate video. Then right-click on any handle or on the bar under a light curve and select the Export video (selection only) option in the context menu. The form for video export in H.264 / H.265 format then opens, in which you can make the settings for the video export. You can find a detailed description of this in the Export in H.264 / H.265 format chapter.
In contrast to exporting an entire show, in this case m.objects creates the finished video in the exported subdirectory of your project's video directory.
The reason for this procedure is simply that you usually use the video directly in the timeline of your project. However, if you would like to save the video in a different folder, select the desired directory in the Export video file window.
When exporting a selection as a video, you can of course also integrate the sound on the audio tracks by also selecting the corresponding sound sample(s). During the subsequent export, m.objects only takes into account the area of the audio tracks that lies within the selection on the image tracks. Sound before or after the selection on the image tracks is therefore not exported.
As of the m.objects creative expansion level (i.e. also in m.objects pro and m.objects ultimate), the image evaluation mode is available. This allows both individuals and groups of up to 10 jurors to view a large number of images together and evaluate them using various methods.
The main advantages of using m.objects over conventional solutions are:
- No program interactions (e.g. mouse clicks) required during an evaluation run
- High display quality, even above UHD / 4K if required
- Delay-free processing even with a very large number of images (e.g. competitions)
- Extremely stable software environment with automatic data backup
- Evaluation runs can be documented
- Management of separate input devices for each juror
- Automatic evaluation and statistics
A project for image evaluation is basically nothing more than an m.objects show, which you therefore create in exactly the same way using the project wizard under File / New show. You should provide at least two image tracks and the aspect ratio should correspond to that of the display device used.
Before importing the images, first activate the evaluation mode in the Control menu.
In the following window, the checkbox Activate evaluation mode must now be ticked.
You should select the option Always reassign buttons if the input devices of the participants may have been reconnected or swapped between several evaluation runs. You can find more information on this further down in the text.
You can also specify here whether the participants' scores are displayed openly or hidden in the scoring runs. If the check mark is set for Hide evaluation, only whether the evaluation has already taken place is displayed later, but not how it was evaluated.
The random image order option means that, during import, images are not placed in their order from the lightbox or (when importing via drag & drop directly from the Explorer) from the folder structure, but are randomly shuffled on the image tracks. For example, competition images are not displayed in the order of the photographers, the date they were taken or the camera model.
The Display statistics function can only be used after at least one completed evaluation run.
Now import the images into the timeline either via the lightbox, via the context menu (right-click on an image track) or via drag & drop directly from Windows Explorer. You can of course also drag a folder symbol onto the image tracks so that m.objects distributes all images from this folder and any existing subfolders to the tracks.
During distribution, m.objects takes into account the crossfade and distance settings selected in the Evaluation mode form. In addition, each image is automatically assigned a wait mark. The value for Distance determines the time interval between the start of the fade-out of one image and the fade-in of the next. If the value is 0.00, the result is a normal crossfade. If the value corresponds to the value in the Crossfade time field, there is a complete fade to black before the next image appears. A slight overlap, as in the example above, has proven to be aesthetically pleasing and fatigue-free in practice, even for long evaluation runs.
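The interplay of the Crossfade time and Distance values can be sketched as follows (a minimal illustration in Python, not m.objects code; the function name is made up for this example):

```python
def transition(crossfade_time: float, distance: float) -> str:
    """Classify the transition produced by the evaluation-mode settings.

    crossfade_time: duration of each image's fade-out/fade-in (seconds)
    distance: interval between the start of one image's fade-out and
              the fade-in of the next image (seconds)
    """
    if distance <= 0.0:
        return "normal crossfade"        # next image fades in as the previous fades out
    if distance < crossfade_time:
        return "slight overlap"          # brief partial overlap, as in the example above
    if distance == crossfade_time:
        return "complete fade to black"  # previous image reaches black exactly as the next starts
    # any larger distance leaves a black pause between the images
    return "black pause of %.2f s" % (distance - crossfade_time)

print(transition(2.0, 0.0))   # normal crossfade
print(transition(2.0, 0.5))   # slight overlap
print(transition(2.0, 2.0))   # complete fade to black
```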
To start an evaluation run, simply press the play button. Please note that the evaluation mode (see above) must be activated for this. If you have closed and reopened the program in the meantime, you may have to reactivate it first.
The Define buttons form now appears. Under Number of jurors, you can set the number of people involved (1 to 10). Specifying 1 can be very useful for the quick and effective selection of images from a large image pool for a show by a single person.
If a separate input device (e.g. a USB keyboard or a USB numeric keypad, a USB button, a USB remote control) is not available for each juror, different buttons on the same input device can also be used by different jurors.
Each evaluation run can be interrupted at any time, saved and resumed later.
Three different evaluation modes are available:
The +/- variant enables an initial selection of images; each vote is simply positive or negative.
If this has not already been done in advance, the buttons for the positive and negative evaluation of each juror are now assigned automatically. It is sufficient for the jurors to press the buttons requested on the screen once in succession.
Once all assignments have been made, the rating run begins with the first image that has not yet been rated. If the locator is placed in front of already rated images at the start - e.g. after an interruption - m.objects can also reset the ratings of the following images after a query.
During the evaluation, icons in the bottom right-hand corner of the screen show whether or how a vote has already been cast, depending on whether the Hide evaluation option was previously selected. A gray question mark symbol means that the corresponding juror has not yet made an entry. As soon as this has happened, the color changes from gray to green. If the evaluation is not hidden, a green circle appears instead for a positive evaluation and a red circle for a negative evaluation.
A submitted evaluation cannot be withdrawn without interrupting the evaluation process.
As soon as all jurors have submitted their ratings, m.objects will fade to the next image without delay.
During the run, the status bar of the m.objects main window shows the number of images still to be evaluated and the total number of images at the bottom left.
After each evaluation, m.objects creates a backup so that in the event of a technical problem such as a power failure, the last version is automatically reopened (file type *.moa, m.objects Autosave).
Once the complete run has been completed, the current status can be saved again. For documentation purposes, it can be saved in the same project directory under a new name (File / Save show as...).
In the 0 ... 9 mode, you can rate images in a more differentiated way, with a score from 0 to 9, i.e. on 10 levels. If this evaluation run was preceded by a selection run (see evaluation mode +/-), only the images previously rated as predominantly positive are automatically used. However, it is also possible to start this run directly, without a prior selection.
Again, the automatic assignment of buttons appears if required.
Incidentally, all jurors can use input devices of the same type and the same keys on each of them, since the program can distinguish which input device a key was pressed on. Numeric keypads, for example, are well suited for this evaluation run.
In rating mode 0 ... 9, the differentiated rating now appears in the bottom right-hand corner of the screen. If the rating is hidden, the question mark symbol also changes color from grey to green as soon as the rating has been submitted. Once all ratings have been received, m.objects immediately switches to the next screen.
At the end of this run, too, a new save can be made in the same project directory using File / Save show as... for later documentation.
To start this evaluation mode, at least one of the two runs described above must have already been completed, as the images now appear on the screen in a grid based on the resulting ratings. Unlike in the previous modes, a separate input device per juror is not useful here; instead, control can take place via a single device, e.g. a normal keyboard.
The discussion tableau contains the rated images in descending order, starting with the best-rated image at the top left. Next to the top ten images, the ranking is shown (again the numbers 0 to 9, corresponding to places 1 to 10). The currently selected picture is surrounded by a white frame. You can use the arrow keys to move the selection. Pressing the Enter key displays the currently selected image in full screen together with its rating. You can also use the arrow keys to switch between images in full-screen view. Pressing the Enter key again switches back to the matrix view.
To change the ranking of a selected image manually after a joint discussion, simply press the corresponding number key. The image will then move to the corresponding position in the matrix and all other images will move up accordingly.
When a rating run is started, the images are automatically arranged on the m.objects timeline according to the existing rating. The highest rated images are always at the end of the timeline. This order is updated immediately when the ranking is changed manually during the final discussion.
If you only carry out a single rating run (selection +/- or differentiated rating 0..9), you can also ensure immediately afterwards that the images are arranged on the timeline according to their rating. To do this, select the Sort by rating option in the Control menu.
In addition, you can use the media list from the View menu to keep track of the current status during the judging process or to inspect it after a judging run has been completed. Next to each image entry you will find the rating(s) submitted. Using the context menu of the media list, you can export a simple list with the file names and the respective ratings for further external processing. Here, too, the top-ranked images can be found at the end of the list.
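Such an exported list can then be processed with any external tool. As a sketch, assuming a simple one-entry-per-line format with the file name and rating separated by a tab (the actual export format may differ), a few lines of Python are enough to rank the results:

```python
# Hypothetical sketch: rank images from an exported rating list.
# Assumed input format: one "filename<TAB>rating" pair per line.
def rank_images(text: str):
    entries = []
    for line in text.strip().splitlines():
        name, rating = line.rsplit("\t", 1)  # split off the rating column
        entries.append((name, int(rating)))
    # highest rating first, like the discussion tableau
    return sorted(entries, key=lambda e: e[1], reverse=True)

sample = "dunes.jpg\t7\nharbour.jpg\t9\nforest.jpg\t4\n"
for name, rating in rank_images(sample):
    print(rating, name)
```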
The Live Video function is available in all m.objects ultimate licenses and in older pro licenses with the Live Video add-on module.
Live Video offers you the option of integrating an external video source into your presentation. You can display it on the screen in full screen or integrate it into the action on the screen as a reduced window with an image field.
In this way, for example, the internal camera of the presentation notebook can film the speaker and transmit the image live into the presentation. However, you can also use a capture card to connect external devices, such as an external camera, another computer, a media player or a Blu-ray player. This external device then sends its video signal via HDMI to the capture card, which in turn is connected to the presentation computer via USB.
To work with Live Video in m.objects, first insert any video into an image track. The nature of this video (resolution, frame rate, etc.) is completely irrelevant. If you want to play the live video later in a reduced size against a background, make sure that the video is on one of the upper image tracks and is not overlaid by other content.
The length of the video on the timeline corresponds to the duration for which the live video can be viewed later.
Double-click on the light curve to open the video properties window.
At the top, check the box next to Live source.
In the Video file field, you must now tell m.objects which source is to be used for the live video. This source - for example, an internal or external camera - is addressed via an index: the first device known to Windows has index 0; other devices, if available, follow with index 1, 2 and so on.
As soon as you have confirmed the form with OK, the live image can be seen in the m.objects canvas.
If the live video is to be displayed at a smaller size, simply insert an image field object into the light curve and use it to set the desired size.
You can also enter the maximum resolution used in the video properties window.
It is often not necessary to output the live video in full resolution, especially in a reduced display. You can often improve performance by reducing the resolution. For example, an internal notebook camera may be able to deliver a better frame rate and therefore a smoother display at a lower resolution.
It is also possible to integrate several live videos at the same time.
In this case, please note that the same source cannot be output multiple times. You must therefore assign several sources to several live videos.
The Remote function is available in m.objects ultimate and in the earlier m.objects pro version.
Of course, you primarily control the picture and sound of your AV show from m.objects. However, you can also remotely control devices with a corresponding interface to the computer from the m.objects timeline. This can be a control for electric blinds to darken the room before the presentation or a fog cannon for special effects. Conversely, an external event can also start the locator in the timeline. This could be a light barrier, for example, which starts a certain section of the presentation when someone enters the room.
To be able to use Remote, you first need special tracks in the timeline, analogous to the image and sound tracks. To do this, click on the gear icon in the toolbar to open the component selection view. From the tool window, drag the symbol for the remote control into the empty gray area below and enter the desired number of tracks. It makes sense to select a separate track for each device to be controlled.
Click on the cogwheel symbol again so that it no longer flashes.
In the next step, set up the required driver. To do this, select Settings / Driver settings, double-click on the driver (Universal) in the following list on the left and select Universal COM port driver. Your selection now appears in the field on the right, which you select by double-clicking and configure in the following window.
The settings required for the configuration can be found in the manual for the device you want to control. Then confirm with OK.
You must now assign the newly set up driver to the track for the remote control. To do this, click on the wrench symbol.
You will now see the driver assignment view in front of you. Activate the Remote control component by clicking on it, then drag the new driver from the tool window onto the track while holding down the mouse button, and release it there. The driver is now stored where Shell command driver was previously displayed.
Again, click on the wrench to return to the standard view of the user interface. If the remote control track is activated, you will see the Data output tool in the tool window. Double-click on it to open the configuration window, where you can enter the required data, which can also be found in the device manual. When connecting a fog cannon, for example, the text "Warm up" could be entered here, meaning that the device should be warmed up.
Use the plus symbol in the toolbar to create additional tools for remote control, which you can provide with the corresponding data and commands.
You can now place the tools on the remote control track in the usual way by dragging them with the mouse to the desired positions and thus adapt the control of the device to the timing of your show.
During the course of an m.objects presentation, external programs and files can be started directly from the timeline. For example, you can start an EXE file that was exported from another m.objects presentation at a certain point in your presentation. The locator then stops as long as the EXE file is running and then starts again. This makes it possible, for example, to call up a series of presentations from a timeline without having to start them manually.
In this way, you can also call up any other applications or open files of different formats. There is also always the alternative option of continuing to run the locator or having the m.objects canvas in the foreground.
If a driver is already stored on the remote control track, you must first remove it from the track in the driver assignment (View / Driver assignment or via the wrench symbol).
Using the control function is very simple: Click on one of the remote control tracks. You will see the Open program / file tool in the tool window.
Drag this tool onto a remote control track at the desired location - where the application or file is to be called up. It is stored there as an icon.
Double-click on this symbol to open the corresponding editing window.
Use the Search button to select the application or file to be started here.
You can enter additional parameters for applications in the Optional command line parameters line. For example, if you specify a presentation file that you have exported from an m.objects show, you can specify that it should run in a loop, i.e. always start again from the beginning. To do this, simply enter "l" or "loop". You can read more about this in the chapter EXE file with call parameters.
Please refer to the relevant documentation to find out which command line parameters are used for other applications.
Below this, specify whether the locator should wait until execution is complete, i.e. until the application terminates itself or is terminated manually, and whether the m.objects canvas should be in the foreground during execution.
Once you have entered these details, confirm with OK.
Repeat this procedure for other applications that you would like to call up elsewhere.
If the remote control component is active, you will find the Control canvas window tool in the tool window.
This allows you to open, close, minimize and restore the canvas - both in windowed mode and in full-screen mode - in a programmed manner.
To use the tool, hold down the mouse button and drag it to the desired position in a remote control track. As soon as the locator reaches this object, the canvas is reopened, closed, minimized or restored, depending on the selected setting.
As minimizing and restoring the canvas works without delay, these commands are well suited to making another application running in the background visible and covering it up again during a presentation.
You can use remote commands to control certain functions of digital projectors directly. m.objects offers drivers for PJLink (network)-compatible projectors (under Universal). After assigning the driver in the Settings / Driver settings menu, further tools are available for the Remote control component.
These tools can be used to switch projectors on and off by inserting them into the corresponding position on the remote control track. In addition, the display of the image can be switched on and off (possibly via a shutter, if available in the projector) or a specific signal input can be preselected.
Lighting control / DMX control is available in the m.objects ultimate expansion stage.
DMX is a digital protocol that is mainly used in event and stage technology to control lighting systems and effect devices. The range of DMX applications is wide. For example, individual lamps can be switched on and off or dimmed. You can also control the colors of spotlights that have several colors, and DMX can also be used to control moving heads (moving spotlights) or fog cannons.
For DMX control via a PC, this is first connected to a DMX interface that translates the control signals from the computer into DMX signals. This connection is usually made via USB. The interface in turn is connected to the first device to be controlled using a DMX cable. Each device that is controlled in this way usually has a DMX input and a DMX output to which other devices can be connected if necessary.
The DMX protocol transmits 512 separate channels, each of which can control a specific function. In the case of a multi-colored spotlight, for example, one channel can be responsible for controlling the brightness of each individual color. Each channel in turn has 8 bits and can therefore transmit 256 different values. In this example, the brightness could therefore be controlled in up to 256 levels. How exactly the channels and values are assigned is determined by the manufacturer of the device and can be found in the corresponding documentation.
To be able to work with lighting control / DMX control in m.objects, you must first set up the required drivers. To do this, select Driver settings in the Settings menu. Now click on the + symbol in front of the manufacturer of the DMX interface used in the list on the left-hand side and tick the relevant driver in the branch that opens. When setting up the driver for the first time, the corresponding window opens directly at this point; in this case, you can skip the description of the following two screenshots.
If the driver has already been set up, it will first appear on the right-hand side under Selected drivers.
Double-click on this entry and click on Set up in the following window.
Another window opens.
The value you enter for the Channels option depends on how many and which channels you use for your DMX control.
For example, if you want to control eight dimmable lamps and also a spotlight that has four colors, you need a total of 12 channels: eight channels for the brightness of the lamps and one for each of the four spotlight colors.
If you now want to use channel 122 for one of the functions, for example, you must also enter this value here so that m.objects can make the corresponding channel available. This means that you enter the number of channels required or the highest channel to be used here.
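The channel model described above can be sketched in a few lines of Python (an illustration of the DMX data model only, not of the m.objects driver; the channel assignment is the hypothetical 12-channel setup from the example):

```python
# A DMX universe carries 512 channels, each holding an 8-bit value (0-255).
universe = bytearray(512)

def set_channel(channel: int, value: int) -> None:
    """Set a DMX channel (1-512) to a value (0-255)."""
    if not 1 <= channel <= 512:
        raise ValueError("DMX channel must be between 1 and 512")
    if not 0 <= value <= 255:
        raise ValueError("DMX value must fit in 8 bits (0-255)")
    universe[channel - 1] = value

# Hypothetical 12-channel setup: eight dimmable lamps on channels 1-8,
# one four-color spotlight on channels 9-12.
for ch in range(1, 9):
    set_channel(ch, 128)   # all lamps at roughly half brightness
set_channel(9, 255)        # spotlight: first color fully on

print(universe[:12].hex())
```

How the channels and values map to actual device functions is determined by the device manufacturer, as described above.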
With the Reset all channels on stop option, all DMX devices are reset to their initial state as soon as m.objects switches to stop mode. This means that if you have switched on a spotlight via m.objects, it will switch off again when you click on the Stop button in the program, press the ESC key or close the show.
Confirm the windows with OK or Close.
In the next step, click on the icon with the cogwheels in the toolbar to access the component selection.
There, drag the Light control component from the tool window down into the gray area.
Here you enter as many tracks as you need DMX channels. In the example above, this would be 12 tracks.
Confirm the entry with OK. If an empty pop-up window then appears, you can simply close it. Click on the gear icon again to return to the normal view of the m.objects desktop.
A DMX channel is already assigned to each of the newly created tracks. If you want to change this assignment, click on the wrench symbol in the toolbar.
In the driver assignment view, select the relevant driver and delete it using the Del key. Then drag the desired driver from the tool window onto the track. Click on the wrench again to return to the normal view.
The DMX tracks are now prepared and you can start creating the control system. Double-click on a track to create a new handle. The height of the handle represents the set value of the DMX channel and can be set from 0 to 255. For exact values, it makes sense to enter them manually by double-clicking on the handle to open its properties window and entering the appropriate value there.
You will now see a green bar at the corresponding height on the DMX track, which extends over the entire length of the track. The control becomes active as soon as m.objects is in pause or playback mode. In this case, the control signal would remain active for the entire duration of the presentation.
Now create a handle in front of the existing one and drag it down to the value 0. You have now created a continuous fade-in.
You can set up the termination of the control signal with two further handles.
Alternatively, you will find a range of tools in the tool window (for active DMX tracks) that already contain a fade-in of the DMX signal in different lengths from 0 to 4 seconds. To use one of these tools, simply drag it onto the desired track. There you can change the settings of the handles or add more if required.
When working with the DMX controller, it is also a good idea to save certain constellations that you use repeatedly as macros. This can be, for example, a certain lighting mood that is created with several spotlights or the adjustment of moving spotlights to the positions of changing speakers on stage. For example, a macro can contain the commands for switching on and aligning a spotlight to the first speaker. Another macro contains the control information for panning to the second speaker and a third macro contains the commands for switching off the spotlight during the break or at the end of the presentation.
To save such a sequence as a macro, select all the handles it contains, right-click on one of them and select Create macro in the context menu. Then give the macro a name. The macro is now saved in the tool window and can be dragged onto the tracks like a tool.
You can use call parameters to start m.objects in a specific way. In the chapter EXE file with call parameters you have already become familiar with such parameters for finished presentation files. The procedure for the modified start of m.objects is very similar.
Once m.objects has been installed, the program icon is available on the desktop for you to start it.
(If it is missing, create a shortcut to the 'mobjects.exe' file in the 'Program Files/m.objects' folder. To do this, right-click on the file and select Create shortcut. Then move the shortcut to the desktop).
Right-click on the start icon and select the properties.
Under Target, you will see the complete path to the mobjects.exe file. Click in this field and position the cursor at the very end of the path, after the file name. A call parameter is inserted at this point, beginning with a space: C:\Programme\m.objects\mobjects.exe /wait
The following call parameters are available. You can use either the short or the long notation:

| Parameter | Effect |
|---|---|
| /"C:\m.objects data\Show\Project1\filename.mos" | Direct call of a specific presentation when starting m.objects; the complete file path is specified and should be in quotation marks. |
| /m or /minimize | Open m.objects on the taskbar: the program stays in the Windows taskbar while the canvas opens (if the file was saved that way); the image thumbnails in the light curves are not loaded. |
| /n or /nosplash | m.objects opens without a start screen. |
| /e or /empty | Start with an empty window, i.e. the last opened file is not loaded. |
| /wait | Waiting marks are triggered automatically after a preselected time (in seconds); if no time is specified, they are triggered after 2 seconds. |
| /d=30 | Delay of the program start in seconds. |

In addition, there are parameters to start playback at the saved locator position, to exit m.objects at the end of the presentation, and to initialize m.objects - e.g. when starting from the autostart directory - without loading a presentation yet; the latter is useful for multiscreen applications with networked computers, so that all slave computers can log on to the master before the show starts.
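For example, the Target field of a shortcut could look like one of the following (the paths shown are only examples and depend on your installation):

```
"C:\Programme\m.objects\mobjects.exe" /nosplash /d=10
"C:\Programme\m.objects\mobjects.exe" /"C:\m.objects data\Show\Project1\filename.mos" /wait
```

The first line starts m.objects without a splash screen after a 10-second delay; the second opens a specific show and triggers waiting marks automatically.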
| Expansion stages | basic | live | creative | ultimate |
|---|---|---|---|---|
| Image tracks | 3 | 3 | unlimited | unlimited |
| Stereo soundtracks | 3 | 3 | up to 256 | up to 256 |
| Maximum output resolution | WQXGA | Ultra HD | unlimited | unlimited |
| Internal title generator | ✓ | ✓ | ✓ | ✓ |
| Mask effects | ✓ | ✓ | ✓ | ✓ |
| Export of video files (WMV, MPEG-2, MPEG-4 etc.) | ✓ | ✓ | ✓ | ✓ |
| Export of standalone presentation files (EXE) | ✓ | ✓ | ✓ | ✓ |
| Maximum resolution of integrated videos | Full HD | Ultra HD | unlimited | unlimited |
| Lossless video trimming | ✓ | ✓ | ✓ | ✓ |
| Blending effects (QuickBlending) | ✓ | ✓ | ✓ | ✓ |
| Keyword management | ✓ | ✓ | ✓ | ✓ |
| Transfer of keywords from Adobe Lightroom | ✓ | ✓ | ✓ | ✓ |
| Animation: zoom, tracking shot, rotation, 3D animation | ✓ | ✓ | ✓ | ✓ |
| Color grading per LUT | ✓ | ✓ | ✓ | ✓ |
| Animation: passepartout, shadow / glow, blur, reflection | - | ✓ | ✓ | ✓ |
| Dynamic slow motion / time lapse | - | ✓ | ✓ | ✓ |
| Real-time video / image editing (animatable): white balance, brightness, contrast, gamma, hue, tint, sharpness, color grading | - | ✓ | ✓ | ✓ |
| Video stabilization | - | ✓ | ✓ | ✓ |
| Real-time post-processing | - | ✓ | ✓ | ✓ |
| License transferable | - | ✓ | ✓ | ✓ |
| Commercially usable | - | ✓ | ✓ | ✓ |
| Speaker support for live presentations incl. configurable interface for wireless remote control | - | ✓ | ✓ | ✓ |
| Interactivity through markers and mouse-sensitive image fields | - | ✓ | ✓ | ✓ |
| Multi-channel sound, sound effects, interface to DirectX plug-ins | - | ✓ | ✓ | ✓ |
| Chroma keying and alpha channel for video | - | - | ✓ | ✓ |
| Stereoscopic mode (input and output) | - | - | ✓ | ✓ |
| Image evaluation mode | - | - | ✓ | ✓ |
| Programmed start of external files | - | - | ✓ | ✓ |
| Relay control | - | - | - | ✓ |
| Multi-field projection via several projectors or screens (expandable using the Multiscreen module) | - | - | - | 2 |
| DMX lighting control via suitable DMX interface | - | - | - | ✓ |
| Network connectivity, timecode synchronization, PJLink protocol, wired remote control | - | - | - | ✓ |
| Integration of live video feed | - | - | - | ✓ |

The Multiscreen module extends the output with ultimate to up to 64 digital projectors or screens (max. 16 on one PC).
A PC used for high-quality m.objects presentations should have at least these features.
- Standard PC or notebook with AMD or Intel CPU from 1.5 GHz, or Apple Mac with Intel CPU
- MS Windows XP, Vista, Windows 7, Windows 8, Windows 10, Windows 11, 32/64 bit
- 3D graphics card (at least 512 MB video RAM recommended)
- Standard sound card, screen from 1024x768
- Apple computer with Intel processor, macOS 10.13 High Sierra or higher
- Apple computers with all M1 and M2 processors developed by Apple and macOS 11 Big Sur or macOS 12 Monterey
Operating system
The use of a 64-bit operating system is highly recommended. m.objects has a special architecture of several independent processes running in parallel and therefore benefits greatly from the memory management of a 64-bit environment, which is considerably more powerful than that of a 32-bit environment.
Notebook vs. desktop
Notebooks with the appropriate equipment are just as suitable as desktop PCs for demonstrations with m.objects. Due to their compactness, they are of course particularly suitable for mobile use. A digital projector can provide the full-screen presentation on the external monitor output, while the m.objects interface and additional m.objects aids are shown on the display of the device for an overview.
Processor
The performance of the main processor (CPU) is not critical in many areas due to the use of highly optimized algorithms within m.objects. A current CPU is usually only very slightly utilized during the playback of high-resolution images and stereo sound.
macOS:
In this version, the decoding of video material benefits greatly from the performance of the CPU, while all color conversions (YUV -> RGB) and color grading are performed on the graphics chip. An automatic distribution of the computing load across the available computing cores ensures optimum use of the CPU performance. For the use of 4K video and/or particularly high frame rates, it therefore makes sense to use a correspondingly fast processor such as the latest generation Intel i7 / i9 or M1 (Apple Silicon) or its successor (M2, M3); the Pro, Max etc. versions of the processors increase the performance accordingly.
Windows:
Either the computing power for decoding video is provided by the main processor, or a modern graphics processor takes on the main part of the load. Which component is used can be set globally within m.objects or individually for each video. If powerful graphics hardware (see below) is available and the above-mentioned video formats are used, no particularly fast CPU is required for perfectly smooth playback of demanding 4K video material.
However, if the graphics card is older, less powerful or other video formats are to be processed in high resolution (e.g. Apple ProRes), a powerful CPU should be used. Processors with 4 or more cores, such as suitable Intel Core i5, i7 or i9 or correspondingly powerful XEON models, are particularly suitable. Systems with AMD processors (e.g. AMD Ryzen) or other compatible chips can also be used without restrictions, provided they have the required performance. m.objects makes intensive use of the possibility of processing tasks in parallel (multi-threading) on systems with several processor cores.
Graphics chip
Even more important than CPU performance in most presentation applications is the suitability of the graphics card. It is essential that the graphics chip delivers a constant refresh rate, especially for the playback of animations. Pure image transitions are less critical in this respect.
The following graphics units, for example, are well suited for the smooth running of high-resolution digital projection or screen display:
- AMD: For numerous applications, Radeon HD models whose hundreds digit is at least 6, better 7, e.g. 77x0 or 78x0, or the newer graphics chips of the Radeon R7 / R9 type are sufficient. The newer RX 4xx / 5xx / Vega types are currently particularly recommended.
- NVidia (Apple Mac only up to model year 2015): For many applications, GeForce models with a three-digit model number whose tens digit is 4 or higher are sufficient, e.g. GT 74x, GTX 76x etc.
Windows: The newer GTX 1050Ti / 1060 / 1070 / 1080 / 16x0 models are currently particularly recommended. All of these models are capable of decoding 4K video, while higher-performance models such as the RTX 2060 / 2070 / 2080 are of course also very suitable.
- Intel: Systems with Intel i3/5/7/9 and Iris Pro 5200 or the newer HD or UHD 5x0 / 6x0 already provide sufficient performance for many arrangements, so that an extra graphics chip is generally not required here. With the more powerful Intel Iris 5x0 or 6x0, which can be found on some processors from production year 2016, complex arrangements with numerous image tracks can already be reproduced smoothly in Full HD resolution. However, these graphics systems are only suitable to a limited extent for output resolutions higher than Full HD. Newer Intel CPUs with integrated Intel Iris Xe graphics offer even better performance, but even these do not achieve the performance of a current dedicated graphics unit for gaming applications. Older systems with chipset-integrated graphics (e.g. Intel GM945) are only suitable for less demanding presentations.
- Apple M1/M2/M3: The graphics unit installed on the Mx-SoC reaches or even exceeds the performance of some mid-range gaming graphics cards. This is remarkable in view of the fact that it is a CPU-integrated graphics solution that is not very power-hungry. In practice, this performance is delivered to the output device, meaning that M1 and especially M2-based systems can be described as very suitable for most applications. SoCs of the Mx Pro or Mx Max type work considerably faster and can also be used without hesitation for the output of complex presentations on UHD devices.
At https://www.videocardbenchmark.net/high_end_gpus.html you will find a performance comparison of the graphics chips available on the market that are suitable for m.objects. The effective graphics performance of an overall system depends on many parameters, so the choice of a suitable combination of CPU and graphics chip is not the only decisive factor. As a rough guide, however, a G3D Mark of at least 1800 is recommended for Full HD presentations under Windows 10, and at least 6000 for processing 4K video and 4K output. When purchasing a new system, it is of course advisable to allow a certain reserve for future developments. For 4K output, please also read the explanations on the connections below.
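Purely as an illustration, the rule of thumb above can be written out as a small check. The function name and structure are our own invention; only the threshold values (1800 for Full HD, 6000 for 4K) come from the guideline stated here.

```python
# Sketch of the G3D Mark rule of thumb. Only the thresholds 1800
# (Full HD) and 6000 (4K) are taken from the guideline; everything
# else is illustrative.

def recommended_use(g3d_mark: int) -> str:
    """Map a G3D Mark score to the rough suitability guideline."""
    if g3d_mark >= 6000:
        return "4K video processing and 4K output"
    if g3d_mark >= 1800:
        return "Full HD presentations"
    return "below the recommended minimum for Full HD"

print(recommended_use(2500))  # prints "Full HD presentations"
```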
When purchasing hardware, also pay attention to the card's video memory, which is permanently installed on the graphics hardware and cannot be retrofitted separately. Fast memory technology such as GDDR5 or even GDDR6 offers performance advantages. Even for simple presentations you should look for 512 MB of video RAM or more; projects with numerous image tracks benefit from significantly more, and 2 GB of graphics memory is the minimum for intensive work with 4K video.
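Why numerous image tracks eat video RAM becomes clear from a quick estimate (a sketch only; m.objects' actual memory management may differ): a single uncompressed RGBA frame at UHD resolution already occupies about 32 MiB.

```python
# Rough texture-memory estimate: one uncompressed RGBA frame
# (4 bytes per pixel) at a given resolution. This is a back-of-the-
# envelope figure, not a description of m.objects' internal caching.

def frame_mib(width: int, height: int, bytes_per_pixel: int = 4) -> float:
    return width * height * bytes_per_pixel / 2**20

print(f"{frame_mib(3840, 2160):.0f} MiB per UHD frame")   # prints "32 MiB per UHD frame"
print(f"{frame_mib(1920, 1080):.0f} MiB per Full HD frame")
```

With a dozen UHD images held simultaneously for crossfades, plus video decode buffers, the budget of a 512 MB card is quickly exhausted.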
In principle, several video outputs of a graphics card can be operated in different resolutions when using m.objects. This means that a modern notebook with an internal display resolution of 1,920 x 1,080 pixels, for example, can still make optimum use of an externally connected Ultra HD TV with a resolution of 3,840 x 2,160.
For the output of resolutions above 2,560 x 1,600 pixels (e.g. UHD: 3,840 x 2,160, 4K or higher), the device should have an HDMI 2.0 or DisplayPort 1.2 (also via Thunderbolt) or newer connection, as otherwise a sufficiently high frame rate (frames/s, fps) cannot be transmitted for smooth playback of animations. Lower resolutions can also be output without any loss of quality via standard HDMI, DVI or older DisplayPort versions.
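The bandwidth requirement behind this recommendation can be estimated with a short calculation (a simplified sketch that ignores blanking intervals and link encoding overhead; the link rates quoted in the comments are the nominal figures of the HDMI specifications):

```python
# Rough estimate of the raw video data rate for a given resolution,
# frame rate and color depth (24 bits per pixel = 8 bits per channel).

def data_rate_gbps(width: int, height: int, fps: int,
                   bits_per_pixel: int = 24) -> float:
    return width * height * fps * bits_per_pixel / 1e9

print(f"UHD@60: {data_rate_gbps(3840, 2160, 60):.1f} Gbit/s")  # prints "UHD@60: 11.9 Gbit/s"
print(f"UHD@30: {data_rate_gbps(3840, 2160, 30):.1f} Gbit/s")

# HDMI 1.4 carries at most about 10.2 Gbit/s, HDMI 2.0 up to 18 Gbit/s:
# UHD at 60 fps therefore needs HDMI 2.0 (or DisplayPort 1.2 or newer),
# while UHD at 30 fps still fits through an HDMI 1.4 link.
```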
m.objects Präsentationstechnik e.K.
Dahlweg 112
D - 48153 Münster
Technical hotline: +49 (251) 97 43 63 13
Fax +49 (251) 97 43 63 11
We are happy to answer any questions you may have about m.objects and are also grateful for any suggestions for improvement.
In particular, you should contact us if you are interested in customizations for special requirements of your installations, or if you are a supplier of AV technology and computer peripherals and would like m.objects to provide driver support for your devices.