
Friday, October 2, 2009

WEB SITE LIFE CYCLE

A web site in many ways resembles other types of corporate information
systems. Each web site has a limited life span, similar to the waterfall
software life cycle model. One major difference is the emphasis on content
development in multimedia applications. The phases of web site
development are as follows: idea formulation, web site design, testing,
and maintenance.
(1) Idea Formulation: During the idea formulation phase, a specific target-marketing
program and content goals and objectives must be set. Since a web
site development project can become very time consuming and a major
capital investment for owners of small businesses, it may be more effective to
identify opportunities in specialized markets that big companies have ignored.
Furthermore, the profile of netizens must be carefully studied [Choi99] to
find out who is surfing the net and what these people are looking at. Small
businesses should be aware of the dynamics of the on-line marketplace and
develop strategies and plans accordingly. The ideas of this phase can lay the
foundation for a comprehensive web site design plan.
(2) Web Site Design: The web site should be integrated into the company's
backbone information system so that it can grow along with the
business. To be successful, companies must integrate e-commerce into their
overall business strategies and processes. Moreover, content needs to be
targeted to specific users' needs. Visitor information should be collected so
that the company can tailor its web pages to the specific needs of
interested customers. Furthermore, it is important that the web site can be
browsed quickly and efficiently. In addition, users should be involved by
giving them an opportunity to submit suggestions and complaints. The
development of navigational cues and the user interface is of critical
importance. A small company can outsource the actual design tasks.
A new web site should also be registered with as many search engines as possible,
which increases the chance that it will be visited. The financial
infrastructure should be developed properly as well.
(3) Testing: Once the implementation is complete, the company should
conduct a pilot to test its integrity and effectiveness. The pilot provides an
opportunity to obtain feedback from functional groups, customers and
business partners. It ensures the quality and usability of the site.
(4) Maintenance: It is essential that new content is developed and the web
site is kept fresh. Timeliness is key on the web. Moreover,
appointing a webmaster to manage the site on a day-to-day basis is
imperative. The webmaster can troubleshoot errors such as links to
defunct web addresses, track the site's traffic, use reader feedback to
build a loyal following, and ensure server maintenance and security. This
person (or persons) should also make sure that the company's web site supports
the latest versions of popular browsers.
HIGH PRESENCE AND HIGH TOUCH

The Internet and multimedia are changing the rules of the economy and
redefining our businesses and our lives. They are dismantling big-business
advantages such as mass production, segmented pricing, and barriers of
time and distance. A company can develop a web page and an advertising campaign
and quickly compete in the world market. This has led to a flattening of the
economy, whereby established companies and individuals doing business on
their own can compete on an equal plane. The small companies that succeed
in challenging the large ones are those that can maintain a global
presence and yet make people feel that they are personal and easy to deal
with.
(1) Small companies can interact closely with their customers, so that the
customers feel they can communicate to the small company what
they need, rather than merely accepting the mass-produced
product that a large company sells, with little room for deviation
from it.
(2) The web has changed from just a means of advertising into a medium for
rapidly exchanging ideas with potential customers. Since the small company
listens to what they say, this not only results in satisfied (and probably
faithful) customers but also increases sales significantly over time.
(3) The Internet's primary advantage in advertising is not so much in
attracting attention and conveying a brief message (the tasks assigned to
traditional advertising media), but lies instead in delivering in-depth, detailed
information. Its real power is the ability to provide almost infinite layers of
detail about a product or service, interactively, at the behest of the user.
However, small companies have to work smarter and respond more
quickly [Murr98]. They have to avoid mistakes and make the best
possible use of everything. Corporations with big budgets can afford to lose
their investments, while a small company views the web as a matter of
survival, not merely an investment.
Small businesses also need to realize that having a web site does not
automatically mean the company will reach millions of potential
customers. It simply means that there is the potential to reach millions of
potential customers. The company has to promote the site through
advertisements, e-mail, links from other sites, and cutting-edge multimedia
technology to attract visitors. For a new start-up, a
brand new idea is always crucial. Second, multimedia technology should be
used to provide various kinds of services on the web site. Third, once the site
starts catching on and e-mails start rolling in, more and more person-hours
should be put into keeping up with it all.
Metadata consumption at the proxy

Multimedia adaptation is a key technology for
assuring the quality of end-to-end delivery in the
network. Adaptation can dynamically react on
different kinds of presentation devices and on
unpredictable resource availabilities. Proxy
servers situated in the middle of the delivery
chain constitute an ideal place to control multimedia
delivery and adapt the stream if resource
availability changes. In this context, we built the
Quality-Based Intelligent Proxy (QBIX), which is
a terminal capabilities- and metadata-aware
Real-Time Streaming Protocol (RTSP; http://www.rtsp.org) proxy server
that supports real-time adaptation in the compressed
(temporal adaptation only) and
uncompressed domains. Adaptation improves the
hit rate in the proxy cache and lets the proxy act
as a media gateway supporting transcoding in real
time based on metadata.9 One part of the metadata
is sent by the server and describes the video
variations (MPEG-7 VariationSet descriptions),
and the other part is provided by the terminal end
user (in the form of Composite Capabilities/Preferences
Profiles, which we describe later in this article) sending terminal capabilities to the proxy.
The proxy extracts the terminal capability
information from the video request and checks whether
it has the requested video in its cache in a quality
that matches the terminal capabilities. If the
video is already available, but its properties aren’t
in accordance with the terminal’s characteristics—
for example, the video’s bit rate is too high
to be consumed at the end user’s site—the MPEG-
7 descriptions that accompany the video are
examined by the proxy. They contain hints on
which variations of the original video should be
selected or created (if one doesn’t exist) to meet
the delivery and presentation constraints. These
hints contain the video’s expected size, bit rate,
and quality. The proxy then chooses from among
these variations the one with the highest quality
that meets the restrictions of the end user’s terminal.
The proxy can either load the required
variation from the server or generate it with a
sequence of appropriate transcoding procedures.9
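The selection step described above can be sketched in a few lines. This is an illustrative Python sketch, not QBIX code: the hint fields (bit rate, size, quality) mirror the MPEG-7 variation hints the text mentions, while all names and numbers are assumptions.

```python
# Hypothetical sketch of the proxy's variation-selection step: given the
# terminal's limits and a list of variation hints, pick the highest-quality
# variation that satisfies the constraints.
from dataclasses import dataclass
from typing import Optional

@dataclass
class VariationHint:
    name: str
    bit_rate: int      # bits per second
    size: int          # bytes
    quality: float     # 0.0 .. 1.0, higher is better

@dataclass
class TerminalCapabilities:
    max_bit_rate: int
    max_size: int

def select_variation(hints, caps: TerminalCapabilities) -> Optional[VariationHint]:
    """Return the highest-quality variation meeting the terminal constraints."""
    feasible = [h for h in hints
                if h.bit_rate <= caps.max_bit_rate and h.size <= caps.max_size]
    if not feasible:
        return None  # nothing fits; the proxy would have to transcode instead
    return max(feasible, key=lambda h: h.quality)

hints = [
    VariationHint("original", 2_000_000, 50_000_000, 1.0),
    VariationHint("spatial",    800_000, 20_000_000, 0.7),
    VariationHint("temporal",   400_000, 10_000_000, 0.5),
]
caps = TerminalCapabilities(max_bit_rate=1_000_000, max_size=25_000_000)
best = select_variation(hints, caps)
```

When no cached variation fits, the proxy in the article would instead generate one via transcoding; the `None` branch marks that fallback.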
We implemented metadata consumption at the
proxy and the terminal using our Video ToolKit
(ViTooKi). ViTooKi is principally a set of libraries
that support adaptive standard-compliant video
streaming, transcoding, and proxy caching. It supports
MPEG-1/2/4/7/21 and MP3/AAC file types,
stored in various containers like mp4 and avi,
streamed over standard protocols with retransmission;
a server, proxy, and player are included.
Multimedia Data Cartridge

The MPEG-7 MDC is at the center of the metadata
life cycle because it must manage all the metadata
produced and deliver it to the consuming
elements. The MDC is an extension of an Oracle
database that can store, index, and query multimedia
metadata based on the MPEG-7 standard. It
currently consists of three main parts (see Figure 3,
next page). The core system consists of a multimedia
database schema based on MPEG-7, the Multimedia
Indexing Framework (MIF) supporting
query optimization, and a set of internal and external
libraries for incoming requests and queries.
The MDC has been implemented by a small
group of database programmers with experience
working with the Oracle DBMS kernel. They kept
the extensions to the Oracle database as modular
as possible. Part of these modules, mainly the
indexing framework, is being prepared as a
SourceForge project (http://sourceforge.net/
index.php), which CODAC plans to make available
to the public in spring 2005.

MPEG-7-based database schema

The multimedia schema relies on the MPEG-7
standard to provide a complete multimedia metadata
schema for low-level descriptions (such as color, texture, and shape for images) and
high-level descriptions (such as structural and semantic
descriptions) for all media types. We have mapped
the MPEG-7 descriptors, formulated as XML-types,
to object-relational tables to enable fine-grained
querying on these data. (A detailed explanation of
the mapping is available elsewhere.7)

Library support

A set of internal and external libraries is used
for incoming requests and queries. The
internal libraries serve as access points to the
core system and consist of InitLib, used for creating
new instances of the MDC data type;
InsertLib, which provides insert functionality for
MPEG-7 documents; DeleteLib, for deleting
MPEG-7 documents; UpdateLib, for updating
parts of stored MPEG-7 documents; and QueryLib,
for query services. Furthermore, external libraries
are used to offer application-specific services.
The services we described in the use case scenario
are VideoLib, for obtaining videos with
semantic search criteria, and AudioLib, for querying
with the humming functionality. Both external
libraries (VideoLib and AudioLib) rely on the
search functionality of the QueryLib, which is
basically a translation of search criteria into complex
SQL and XPath statements on the schema tables.
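As a rough illustration of QueryLib's role, the sketch below assembles an SQL statement with an embedded XPath predicate over a stored MPEG-7 document. The table, column, and XPath names are invented for illustration and are not the real MDC schema; `existsNode` is an Oracle XMLType query function used here as a plausible stand-in.

```python
# Hypothetical sketch of translating a high-level search criterion
# ("videos whose singer is X") into SQL with an XPath predicate.
# Schema names are illustrative, not the actual MDC tables.

def semantic_video_query(singer: str) -> str:
    """Build an SQL query selecting media URIs whose MPEG-7 document
    names the given singer (illustrative XPath, not normative MPEG-7)."""
    xpath = f"//Creator[Role='singer']/Name[text()='{singer}']"
    return ("SELECT media_uri FROM mpeg7_documents "
            f"WHERE existsNode(doc, '{xpath}') = 1")

sql = semantic_video_query("Avril Lavigne")
```

A real implementation would bind the singer's name as a parameter rather than interpolating it, to avoid SQL injection; the string building here only shows the translation idea.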
Video variation and metadata for content
adaptation

The annotation framework automatically produces
metadata for video variation as a means of
adaptation. In video variation, the goal is to
generate new videos (variation videos) from the
source video, with reduced data size and quality,
by applying a variation or reduction method.
The supported variations include
❚ temporal variation, which reduces the visual
data’s frame rate through frame dropping;
❚ spatial variation, which encodes fewer pixels
(pixel subsampling) and thereby reduces the
level of detail in the frame; and
❚ color variation, which reduces each pixel’s
color depth.
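The data-size effect of these methods can be estimated with simple arithmetic on uncompressed video. The resolutions, frame rates, and reduction factors below are illustrative examples, not values from the article.

```python
# Back-of-the-envelope sketch of how temporal, spatial, and color variation
# shrink raw (uncompressed) video data. All numbers are examples.

def raw_video_bytes(width: int, height: int, fps: int,
                    seconds: int, bits_per_pixel: int) -> int:
    """Uncompressed size: pixels per frame x frames x bits, in bytes."""
    return width * height * fps * seconds * bits_per_pixel // 8

orig     = raw_video_bytes(640, 480, 30, 10, 24)  # source video
temporal = raw_video_bytes(640, 480, 15, 10, 24)  # frame dropping: half the fps
spatial  = raw_video_bytes(320, 240, 30, 10, 24)  # subsampling: quarter the pixels
color    = raw_video_bytes(640, 480, 30, 10, 8)   # reduced color depth: 8 of 24 bits
```

Real savings on compressed streams differ, since codecs already exploit redundancy, but the proportions show why each method is attractive under different bottlenecks.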
The CODAC project has defined and implemented
two new methods of video variations:5
❚ Object-based variation extracts the foreground
and background objects in a source video and
re-encodes the video with two visual objects.
This facilitates object-based adaptation, which
otherwise would be impossible for a video
encoded without objects. For instance, in our
use case scenario, object-based variation
would be useful for segments with a dynamic
singer foreground and static image background.
This lets an adaptation engine discard,
in case of resource bottlenecks, the static
background image.
❚ Segment-based variation lets us apply variation
methods selectively for video segments based
on the physical characteristics of motion,
texture, and color, thereby minimizing the
quality loss and/or maximizing the reduction
in data size. We segment the source video into
its component shots and automatically select
and apply a set of appropriate methods by
analyzing the degree of physical characteristics
within the segment.
We accomplish variation creation by implementing
a server module called the Variation Factory,
which generates the variations.5 All the
variation methods—including the newly defined
object and segment-based variation methods—are
implemented in the Variation Factory. A source
video can be subjected to a single method or a
combination of them as deemed necessary—for
instance, temporal variation followed by spatial
variation, and so on. At each step, the Variation
Factory produces a variation video, thereby creating
a tree of variations. The Variation Factory is
an application programming interface (API)
including user interfaces to guide the metadata
generation process. The user can control the
adaptation results and rearrange parameters—for
instance, if the perceived quality is too low.
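The "tree of variations" idea can be sketched as follows: each variation method maps a video description to a reduced one, and chaining methods from the source yields a tree. This is a hypothetical Python sketch, not the Variation Factory's actual Java API; the method names and factors are assumptions.

```python
# Illustrative sketch of building a tree of variations by chaining
# variation methods, as the Variation Factory does conceptually.

def temporal(v: dict) -> dict:
    """Frame dropping: halve the frame rate."""
    return {**v, "fps": v["fps"] // 2, "label": v["label"] + "+temporal"}

def spatial(v: dict) -> dict:
    """Pixel subsampling: halve width and height."""
    return {**v, "w": v["w"] // 2, "h": v["h"] // 2,
            "label": v["label"] + "+spatial"}

def build_tree(video: dict, methods, depth: int) -> list:
    """Apply every method to the video, then recurse on each result,
    collecting all variations in the tree (excluding the root)."""
    if depth == 0:
        return []
    children = [m(video) for m in methods]
    tree = list(children)
    for child in children:
        tree.extend(build_tree(child, methods, depth - 1))
    return tree

source = {"label": "source", "w": 640, "h": 480, "fps": 30}
variations = build_tree(source, [temporal, spatial], depth=2)
```

With two methods and depth 2 this yields six variation videos; the metadata document would then record each one's fidelity, size, and priority.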
The Variation Factory is implemented in Java
on top of the Java Media Framework (JMF).
The API supports the integration of audio and
video playback into Java applications and applets.6
The input is video, and the outputs are one or
more adapted variation videos and an MPEG-7
metadata document that describes the source and
the variation videos using descriptors of the VariationSet
description scheme. The MPEG-7 Document
Processor produces the metadata using the Java
API for XML Processing (JAXP), which is generally
used to parse and transform XML documents.
First, a Document Object Model tree is constructed.
Then, the DOM tree is parsed and an
XML text file is produced. The descriptors
include information on the variations’ fidelity,
data size, and priority. We store the created variation
videos in the media server along with the
source video and the MPEG-7 document in the
metadatabase. During delivery, the MPEG-7
metadata document is streamed together with
the requested video.
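The article's document processor uses JAXP in Java; as an analogue, this Python sketch builds a small DOM tree for a simplified variation description and serializes it to XML text. The element and attribute names are placeholders for illustration, not valid MPEG-7 descriptors.

```python
# Illustrative DOM-build-then-serialize flow, mirroring the JAXP steps in
# the text: construct a DOM tree, then produce an XML text document.
from xml.dom.minidom import getDOMImplementation

def variation_set_xml(source_name: str, variations) -> str:
    """Build a (simplified, non-normative) variation description:
    one Source element plus one Variation element per entry, each
    carrying fidelity and priority attributes."""
    dom = getDOMImplementation().createDocument(None, "VariationSet", None)
    root = dom.documentElement
    src = dom.createElement("Source")
    src.setAttribute("name", source_name)
    root.appendChild(src)
    for name, fidelity, priority in variations:
        var = dom.createElement("Variation")
        var.setAttribute("name", name)
        var.setAttribute("fidelity", str(fidelity))
        var.setAttribute("priority", str(priority))
        root.appendChild(var)
    return dom.toxml()  # serialize the DOM tree to XML text

xml_text = variation_set_xml("clip.mp4",
                             [("temporal", 0.5, 1), ("spatial", 0.7, 2)])
```

In the real system, this metadata document is stored in the metadatabase and streamed to the proxy together with the requested video.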
The Life Cycle of Multimedia
Metadata

During its lifetime, multimedia content
undergoes different stages or cycles from
production to consumption. Content is created,
processed or modified in a postproduction stage,
delivered to users, and finally, consumed. Metadata,
or descriptive data about the multimedia
content, pass through similar stages but with different
time lines.1 Metadata may be produced,
modified, and consumed by all actors involved
in the content production-consumption chain
(see the “Life-Cycle Spaces” sidebar for more
information). At each step of the chain, different
kinds of metadata may be produced by highly
different methods and of substantially different
semantic value.
Different metadata let us tie the different multimedia
processes in a life cycle together. However,
to employ these metadata, they must be appropriately
generated. The CODAC Project, led by
Harald Kosch, implements different multimedia
processes and ties them together in the life cycle.
CODAC uses distributed systems to implement
multimedia processes. Figure 1 gives the architectural
overview of this system.
The project’s core component is a Multimedia
Database Management System (MMDBMS),2
which stores content and MPEG-7-based metadata.3
It communicates with a streaming server
for data delivery. The database is realized in the
Multimedia Data Cartridge (MDC)—which is an
extension of the Oracle Database Management
System—to handle multimedia content and
MPEG-7 metadata.

Use case scenario

To demonstrate metadata’s life cycle, let’s consider
a use case scenario. A user watches an interesting
music video on a colleague’s screen and
wants to retrieve the same music video. The only
information she retained was that the singer in
the music video was Avril Lavigne, and she
remembers the song’s melody. She can’t ask her
colleague directly, so she wants to access it from
a multimedia database.
The query service of our Multimedia Data Cartridge
(MDC) offers a solution for finding such a
music video. First, the user can enter a query by
thematic means, thus specifying that the music
video’s singer is Avril Lavigne. (See Figure 2a, p. 80,
for the query interface.) In response to the first
query, the service returns several music videos that
pertain to the singer. To narrow the search, the
user can hum the melody as a query to a
humming service, as Figure 2b shows.
The multimedia database retrieves information
on the music video that meets the query request
and delivers it to the user. Such information
includes the full title, full information on the
singer, the production date, and so on, and finally
the address of the media server where the user
can obtain the video. Thus, the user finds out that
the song she has searched for is entitled “Skater
Boy” and can now access the music video.
Luckily, the user is registered to the media
server storing the video and can request the
music video from this server. In addition, the
user specifies in the video request her mobile
device’s terminal capabilities. Unfortunately, the
media server has only copies of the music video
in a quality that doesn’t meet the terminal constraints.
The server examines metadata generated
in postprocessing of the video to generate a
variation of the video with the best possible quality
satisfying the constraints and then delivers
this variation.
Let’s further assume that the request goes over
an authorized proxy cache that examines if a
cached copy is present. If it’s there, but not in the
appropriate quality, delivery-related metadata
describing the possibility for the video to be
adapted to resource constraints is used by the
proxy cache to adapt the video accordingly.
How sound is converted into digital data

When analog information (such as from a CD or live recording) is converted into
digital data, the signal is sampled at a rate measured in hertz. The higher the sampling
rate, the more accurate your recording will be. For example, a sampling rate of
44 kHz (kilohertz) is far more detailed than a sampling rate of 11 kHz. It is easy to
compare this to a digital picture: the higher the scanning quality, the more detailed the
picture will be. Of course, file size will also increase with higher sampling rates. When
preparing audio for multimedia output, especially web output, it is usually unnecessary to go
beyond a 22 kHz sampling rate, which captures frequencies up to roughly half that rate,
covering most of what listeners notice. Some highs and lows will be lost; however, it is difficult
to detect this, especially with the quality of speakers found on most computers today.
Another technicality to be concerned about when sampling sound is bit depth.
Bit depth essentially determines the dynamic range (or amplitude resolution) of a piece. Bit
depth also controls the resolution of the sound wave (higher bit resolution results in a smoother wave). For
example, a highly structured classical piece calls for a high bit depth, because of the
great dynamic differences between a solo flute section and a full orchestral section.
Many pop tunes need less bit depth, because they are composed within a more even
dynamic range. When sampling music for web format, a 16-bit depth is adequate. In
many instances, 8 bits may work as well and decrease the file size.
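The trade-offs described above reduce to simple arithmetic: uncompressed audio size grows linearly with sampling rate, bit depth, and channel count. A minimal sketch, with example durations and rates:

```python
# Uncompressed (PCM) audio size: samples per second x bits per sample
# x channels x duration, converted to bytes.

def audio_bytes(sample_rate_hz: int, bit_depth: int,
                channels: int, seconds: int) -> int:
    return sample_rate_hz * bit_depth * channels * seconds // 8

cd_quality = audio_bytes(44_100, 16, 2, 60)  # stereo, CD rate: ~10 MB/minute
web_mix    = audio_bytes(22_050, 16, 1, 60)  # mono at 22 kHz: a quarter of that
low_fi     = audio_bytes(22_050, 8, 1, 60)   # dropping to 8 bits halves it again
```

This is why halving the sampling rate, going mono, and reducing bit depth were the standard levers for web-ready audio before compressed formats took over.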


Multi-track recording software

When recording and mixing audio, perhaps the first thing to consider is whether
to go with a Digital Audio Workstation (DAW) or strictly computer software. Digital
Audio Workstations are hardware devices that attach to your computer, allowing
you to store and mix audio through an outside device. Some common components are
mixing boards, multi-track recording devices, and CD burners. The advantage of this is
that processing is usually faster, allowing for more storage space for digital information.
However, the price of DAWs is usually higher than that of the software versions. Some
common manufacturers of DAWs are Tascam and Mackie Designs. Perhaps the top
professional multi-track recording tool is Digidesign's Pro Tools. Digidesign's previous
price of $8,000 made it impossible for private studios and desktop/home studio recorders
to own and operate the equipment. However, this has all changed with the release of
Digi001, a simpler version including both the software and hardware interface
for under $1,000. The hardware component is a single-space box and a PCI card that
works with both Mac and PC. A downloadable Pro Tools software version is also
available for free. Some of the clients who use Digidesign Pro Tools include Nine Inch
Nails, Björk, Smash Mouth, Philip Glass, Third Eye Blind, Paramount Pictures, the Canadian
Broadcasting Corporation, and such movie projects as Nutty Professor II, The Perfect Storm,
The Matrix, American Beauty, and The Prince of Egypt.
Multi-track software packages are perhaps the most popular option, varying in price and
editing power. Some common programs include Samplitude, which samples at 24 bits
and 96 kHz (Windows, $69-$399), and Vegas™ Audio 2.0 (Windows, $449), which has
unlimited tracks, 18 effects, and output options such as streaming media files (WMA and
RM formats). Cakewalk® Pro Suite™ is also a popular program (Windows, $429-$599).
Cakewalk differs from other programs in that it allows for MIDI recording and
editing as well as multi-track recording and editing.