US20140174279A1 - Composition using correlation between melody and lyrics - Google Patents
Composition using correlation between melody and lyrics
- Publication number
- US20140174279A1 (application number US 14/095,019)
- Authority
- US
- United States
- Prior art keywords
- data
- notes
- words
- pattern
- subset
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10H—ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
- G10H1/00—Details of electrophonic musical instruments
- G10H1/0008—Associated control or indicating means
- G10H1/0025—Automatic or semi-automatic music composition, e.g. producing random music, applying rules from music theory or modifying a musical piece
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10H—ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
- G10H2210/00—Aspects or methods of musical processing having intrinsic musical character, i.e. involving musical theory or musical parameters or relying on musical knowledge, as applied in electrophonic musical tools or instruments
- G10H2210/101—Music Composition or musical creation; Tools or processes therefor
- G10H2210/145—Composing rules, e.g. harmonic or musical rules, for use in automatic composition; Rule generation algorithms therefor
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10H—ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
- G10H2220/00—Input/output interfacing specifically adapted for electrophonic musical tools or instruments
- G10H2220/005—Non-interactive screen display of musical or status data
- G10H2220/011—Lyrics displays, e.g. for karaoke applications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10H—ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
- G10H2240/00—Data organisation or data communication aspects, specifically adapted for electrophonic musical tools or instruments
- G10H2240/121—Musical libraries, i.e. musical databases indexed by musical parameters, wavetables, indexing schemes using musical parameters, musical rule bases or knowledge bases, e.g. for automatic composing methods
- G10H2240/131—Library retrieval, i.e. searching a database or selecting a specific musical piece, segment, pattern, rule or parameter set
Definitions
- the probabilistic automaton building act performed by probabilistic automaton building component 314 is described wherein a PA is constructed that is represented by (Q, Σ, Δ, v, F).
- Q is constructed to be the set containing the s-sequences S that satisfy the following two conditions: (a) S has its length equal to l, where l is a user-given parameter, and (b) there exists S′ ∈ D such that S is a sub-string of S′.
- ⁇ is constructed as follows: ⁇ is initially to be . Then, for each pair of a state q ⁇ Q and a symbol t ⁇ , the following two steps are performed. First, a set of states are found, denoted by Q q,t , such that each state q′ in Q q,t satisfies the following: (1) q′[1:1 ⁇ 1] is exactly the same as q[2:1] and (2) q′ [1].tone is exactly the same as t.
- the probability that the initial state is q is set to be q.T / Σ q′∈Q q′.T.
- F is constructed as ∅. This is because the termination of the execution on the PA in the melody composition is not indicated by the final states. Instead, execution terminates after all tone IDs in T have been input, where T is the sequence of tones extracted from the input lyric.
- FIG. 4(A) is an instance of a PA. In the figure, the duration is omitted for simplicity.
- the arrow from one state to another denotes a transition, and the number along the arrow is the input symbol in Σ corresponding to the transition.
- the number within the parentheses is the probability associated with the corresponding transition.
- system 300 generates a melody via melody composition component 320 .
- melody composition component 320 generates a melody by executing the PA constructed by the probabilistic automaton building component 314 with the input of the tone sequence extracted from the input lyric, i.e., T.
- the melody generated by system 300, which is a sequence of notes, is represented by (q1[1].note, q1[2].note, . . . , q1[l].note) ⋄ (q2[l].note) ⋄ (q3[l].note) ⋄ . . . ⋄ (qn[l].note).
- qi[2:l] is exactly the same as qi+1[1:l−1] since there exists a transition from qi to qi+1 in Δ, for 1 ≤ i ≤ n−1.
- melody composition component 320 executes the PA as illustrated in FIG. 4.
- in music, a cadence is a melodic configuration that creates a sense of resolution at the end of a phrase; system 300 can generate notes at the end of each phrase according to this cadence principle. In particular, when notes are generated at the end of a phrase, only the notes related to the cadence are considered instead of all possible notes.
- rhythm can be used for generating the melody.
- the last note of a phrase should be longer.
- the rhythm of a phrase is similar to the rhythm of some of the other phrases.
- one part of the melody is usually similar to another part, so that the song has a coherence effect.
- system 300 can also incorporate this concept. Specifically, whenever another phrase of the melody is generated, it is investigated whether some portions of the previously generated melody can be reused for the new portions of the melody to be composed automatically. If so, some existing portions of the melody are used for the new portions. The criterion is whether each existing portion of the melody, together with the corresponding portion of the lyric, can be found in the frequent patterns mined in Phase I.
- some vocal ranges, such as those of a human, are considered bounded (e.g., at most two octaves).
- the vocal range is the measure of the breadth of pitches that a human voice can sing. Based on the vocal range, system 300 can restrict the possible choices of notes to be generated whenever it executes the PA.
- FIGS. 5 and 6 illustrated are methodologies or flow diagrams in accordance with certain aspects of this disclosure. While, for purposes of simplicity of explanation, the disclosed methods are shown and described as a series of acts, the disclosed subject matter is not limited by the order of acts, as some acts may occur in different orders and/or concurrently with other acts from that shown and described herein. For example, those skilled in the art will understand and appreciate that a methodology can alternatively be represented as a series of interrelated states or events, such as in a state diagram. Moreover, not all illustrated acts may be required to implement a method in accordance with the disclosed subject matter. Additionally, it is to be appreciated that the methodologies disclosed in this disclosure are capable of being stored on an article of manufacture to facilitate transporting and transferring such methodologies to computers or other computing devices.
- tone data is received, by a system comprising a processor, from a data store, wherein the tone data is determined from a set of songs represented by a set of notes and a set of song lyrics represented by a set of words, and wherein the tone data is selected from the data store based at least on first correlation data that correlates the set of notes to the set of words.
- the system analyzes respective key signatures comprising respective major scales or respective minor scales of respective songs of the set of songs based at least on respective frequency distributions of respective sets of notes associated with the respective songs of the set of songs (see the pitch-class-histogram sketch following this method description).
- the system matches respective musical syllable identifiers to letters representing respective notes of the set of notes.
- the system assigns respective tone data values to respective syllable segments associated with respective words of the set of words based at least on second correlation data that correlates the tone data to the syllable identifiers from the data store.
- a pattern is determined by the system, wherein the pattern is at least based on a correlation between a subset of the songs represented by a subset of the notes and a subset of the song lyrics represented by a subset of the words.
- a composition model based at least on the pattern is created by the system.
- the pattern is a sequence of two-tuples, wherein a first tuple element is a note comprising a pitch and duration, a second tuple element is a tone identifier, and the sequence of two-tuples is represented as an association of the note and the tone identifier.
- a melody based at least on the composition model is generated by the system.
- the system pairs the melody at least to the subset of the song lyrics.
- the pairing comprises pairing the melody to the set of song lyrics.
- exemplary method 600 of the disclosed systems is stored in a memory and utilizes a processor to execute computer executable instructions to perform functions.
- the system comprising a processor, receives from a data store, the subset of the notes, wherein the subset of the notes represents a major scale or a minor scale.
- the system extracts tone data associated with the subset of the words and the subset of notes.
- the system maps the tone data to the melody based on the first pattern or the second pattern, where the first pattern is a pattern based on a song composition in a major scale and the second pattern is a pattern based on a song composition in a minor scale.
- the system selects the tone data value that occurs most frequently with regard to respective syllable segments associated with respective words of the subset of the words.
- a melody based at least on the composition model is generated by the system.
- the composition model is a probabilistic model based on at least one of the pattern, the first pattern, or the second pattern.
- the system pairs the melody at least to the subset of the song lyrics. In an aspect, the pairing comprises pairing the melody to the set of song lyrics.
- a suitable environment 700 for implementing various aspects of the claimed subject matter includes a computer 702 .
- the computer 702 includes a processing unit 704 , a system memory 706 , a codec 705 , and a system bus 708 .
- the system bus 708 couples system components including, but not limited to, the system memory 706 to the processing unit 704 .
- the processing unit 704 can be any of various available processors. Dual microprocessors and other multiprocessor architectures also can be employed as the processing unit 704 .
- the system bus 708 can be any of several types of bus structure(s) including the memory bus or memory controller, a peripheral bus or external bus, and/or a local bus using any variety of available bus architectures including, but not limited to, Industrial Standard Architecture (ISA), Micro-Channel Architecture (MSA), Extended ISA (EISA), Intelligent Drive Electronics (IDE), VESA Local Bus (VLB), Peripheral Component Interconnect (PCI), Card Bus, Universal Serial Bus (USB), Advanced Graphics Port (AGP), Personal Computer Memory Card International Association bus (PCMCIA), Firewire (IEEE 1394), and Small Computer Systems Interface (SCSI).
- the system memory 706 includes volatile memory 713 and non-volatile memory 712 .
- the basic input/output system (BIOS) containing the basic routines to transfer information between elements within the computer 702 , such as during start-up, is stored in non-volatile memory 712 .
- codec 705 may include at least one of an encoder or decoder, wherein the at least one of an encoder or decoder may consist of hardware, a combination of hardware and software, or software. Although codec 705 is depicted as a separate component, codec 705 may be contained within non-volatile memory 712.
- non-volatile memory 712 can include read only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory.
- Volatile memory 713 includes random access memory (RAM), which acts as external cache memory. According to present aspects, the volatile memory may store the write operation retry logic (not shown in FIG. 7 ) and the like.
- RAM is available in many forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), and enhanced SDRAM (ESDRAM).
- Disk storage 710 includes, but is not limited to, devices like a magnetic disk drive, solid state disk (SSD), floppy disk drive, tape drive, Jaz drive, Zip drive, LS-70 drive, flash memory card, or memory stick.
- disk storage 710 can include storage medium separately or in combination with other storage medium including, but not limited to, an optical disk drive such as a compact disk ROM device (CD-ROM), CD recordable drive (CD-R Drive), CD rewritable drive (CD-RW Drive) or a digital versatile disk ROM drive (DVD-ROM).
- a removable or non-removable interface is typically used, such as interface 716 .
- FIG. 7 describes software that acts as an intermediary between users and the basic computer resources described in the suitable operating environment 700 .
- Such software includes an operating system 718 .
- Operating system 718 which can be stored on disk storage 710 , acts to control and allocate resources of the computer system 702 .
- Applications 720 take advantage of the management of resources by the operating system through program modules 724 , and program data 726 , such as the boot/shutdown transaction table and the like, stored either in system memory 706 or on disk storage 710 . It is to be appreciated that the claimed subject matter can be implemented with various operating systems or combinations of operating systems.
- Input devices 728 include, but are not limited to, a pointing device such as a mouse, trackball, stylus, touch pad, keyboard, microphone, joystick, game pad, satellite dish, scanner, TV tuner card, digital camera, digital video camera, web camera, and the like.
- These and other input devices connect to the processing unit 704 through the system bus 708 via interface port(s) 730 .
- Interface port(s) 730 include, for example, a serial port, a parallel port, a game port, and a universal serial bus (USB).
- Output device(s) 736 use some of the same type of ports as input device(s) 728 .
- a USB port may be used to provide input to computer 702 , and to output information from computer 702 to an output device 736 .
- Output adapter 734 is provided to illustrate that there are some output devices 736 like monitors, speakers, and printers, among other output devices 736 , which require special adapters.
- the output adapters 734 include, by way of illustration and not limitation, video and sound cards that provide a means of connection between the output device 736 and the system bus 708 . It should be noted that other devices and/or systems of devices provide both input and output capabilities such as remote computer(s) 738 .
- Computer 702 can operate in a networked environment using logical connections to one or more remote computers, such as remote computer(s) 738 .
- the remote computer(s) 738 can be a personal computer, a server, a router, a network PC, a workstation, a microprocessor based appliance, a peer device, a smart phone, a tablet, or other network node, and typically includes many of the elements described relative to computer 702 .
- only a memory storage device 740 is illustrated with remote computer(s) 738 .
- Remote computer(s) 738 is logically connected to computer 702 through a network interface 742 and then connected via communication connection(s) 744 .
- Network interface 742 encompasses wire and/or wireless communication networks such as local-area networks (LAN) and wide-area networks (WAN) and cellular networks.
- LAN technologies include Fiber Distributed Data Interface (FDDI), Copper Distributed Data Interface (CDDI), Ethernet, Token Ring and the like.
- WAN technologies include, but are not limited to, point-to-point links, circuit switching networks like Integrated Services Digital Networks (ISDN) and variations thereon, packet switching networks, and Digital Subscriber Lines (DSL).
- Communication connection(s) 744 refers to the hardware/software employed to connect the network interface 742 to the bus 708 . While communication connection 744 is shown for illustrative clarity inside computer 702 , it can also be external to computer 702 .
- the hardware/software necessary for connection to the network interface 742 includes, for exemplary purposes only, internal and external technologies such as, modems including regular telephone grade modems, cable modems and DSL modems, ISDN adapters, and wired and wireless Ethernet cards, hubs, and routers.
- the system 800 includes one or more client(s) 802 (e.g., laptops, smart phones, PDAs, media players, computers, portable electronic devices, tablets, and the like).
- the client(s) 802 can be hardware and/or software (e.g., threads, processes, computing devices).
- the system 800 also includes one or more server(s) 804 .
- the server(s) 804 can also be hardware or hardware in combination with software (e.g., threads, processes, computing devices).
- the servers 804 can house threads to perform transformations by employing aspects of this disclosure, for example.
- One possible communication between a client 802 and a server 804 can be in the form of a data packet transmitted between two or more computer processes wherein the data packet may include video data.
- the data packet can include a metadata, such as associated contextual information for example.
- the system 800 includes a communication framework 806 (e.g., a global communication network such as the Internet, or mobile network(s)) that can be employed to facilitate communications between the client(s) 802 and the server(s) 804 .
- the client(s) 802 include or are operatively connected to one or more client data store(s) 808 that can be employed to store information local to the client(s) 802 (e.g., associated contextual information).
- the server(s) 804 include or are operatively connected to one or more server data store(s) 810 that can be employed to store information local to the servers 804 .
- a client 802 can transfer an encoded file, in accordance with the disclosed subject matter, to server 804 .
- Server 804 can store the file, decode the file, or transmit the file to another client 802 .
- a client 802 can also transfer an uncompressed file to a server 804 and server 804 can compress the file in accordance with the disclosed subject matter.
- server 804 can encode video information and transmit the information via communication framework 806 to one or more clients 802 .
- the illustrated aspects of the disclosure may also be practiced in distributed computing environments where certain tasks are performed by remote processing devices that are linked through a communications network.
- program modules can be located in both local and remote memory storage devices.
- various components described in this description can include electrical circuit(s) that can include components and circuitry elements of suitable value in order to implement the various embodiments.
- many of the various components can be implemented on one or more integrated circuit (IC) chips.
- a set of components can be implemented in a single IC chip.
- one or more of respective components are fabricated or implemented on separate IC chips.
- the terms used to describe such components are intended to correspond, unless otherwise indicated, to any component which performs the specified function of the described component (e.g., a functional equivalent), even though not structurally equivalent to the disclosed structure, which performs the function in the herein illustrated exemplary aspects of the claimed subject matter.
- the various embodiments include a system as well as a computer-readable storage medium having computer-executable instructions for performing the acts and/or events of the various methods of the claimed subject matter.
- a component may be, but is not limited to being, a process running on a processor (e.g., digital signal processor), a processor, an object, an executable, a thread of execution, a program, and/or a computer.
- an application running on a controller and the controller can be a component.
- One or more components may reside within a process and/or thread of execution and a component may be localized on one computer and/or distributed between two or more computers.
- a “device” can come in the form of specially designed hardware; generalized hardware made specialized by the execution of software thereon that enables the hardware to perform specific function; software stored on a computer readable storage medium; software transmitted on a computer readable transmission medium; or a combination thereof.
- the words "example" or "exemplary" are used in this disclosure to mean serving as an example, instance, or illustration. Any aspect or design described in this disclosure as "exemplary" is not necessarily to be construed as preferred or advantageous over other aspects or designs. Rather, use of the words "example" or "exemplary" is intended to present concepts in a concrete fashion.
- the term “or” is intended to mean an inclusive “or” rather than an exclusive “or”. That is, unless specified otherwise, or clear from context, “X employs A or B” is intended to mean any of the natural inclusive permutations.
- Computer-readable storage media can be any available storage media that can be accessed by the computer, is typically of a non-transitory nature, and can include both volatile and nonvolatile media, removable and non-removable media.
- Computer-readable storage media can be implemented in connection with any method or technology for storage of information such as computer-readable instructions, program modules, structured data, or unstructured data.
- Computer-readable storage media can include, but are not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disk (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or other tangible and/or non-transitory media which can be used to store desired information.
- Computer-readable storage media can be accessed by one or more local or remote computing devices, e.g., via access requests, queries or other data retrieval protocols, for a variety of operations with respect to the information stored by the medium.
- communications media typically embody computer-readable instructions, data structures, program modules or other structured or unstructured data in a data signal that can be transitory such as a modulated data signal, e.g., a carrier wave or other transport mechanism, and includes any information delivery or transport media.
- modulated data signal or signals refers to a signal that has one or more of its characteristics set or changed in such a manner as to encode information in one or more signals.
- communication media include wired media, such as a wired network or direct-wired connection, and wireless media such as acoustic, RF, infrared and other wireless media.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Multimedia (AREA)
- Physics & Mathematics (AREA)
- Acoustics & Sound (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
- Data Mining & Analysis (AREA)
- Databases & Information Systems (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Auxiliary Devices For Music (AREA)
- Electrophonic Musical Instruments (AREA)
Abstract
Description
- This application claims priority to U.S. Provisional Application No. 61/848,028, filed Dec. 21, 2012 and entitled “Automatic Algorithmic Composition by Using Correlation between Melody and Lyric”, which is incorporated by reference herein in its entirety.
- This disclosure relates to systems, methods, and algorithms that automatically generate a melodic composition of a song.
- There are many studies that have proposed algorithms for composing the melody of a song automatically, which is known as algorithmic composition. Algorithms (or, at the very least, formal sets of rules) have been used to compose music for centuries. The term is usually reserved for the use of formal procedures to make music without human intervention, either through the introduction of chance procedures or the use of computers. While many studies have been done, various techniques have their respective limitations, and thus an improved algorithmic composition system is desired.
- The following presents a simplified summary of the disclosure in order to provide a basic understanding of some aspects of the disclosure. This summary is not an extensive overview of the disclosure. It is intended to neither identify key or critical elements of the disclosure nor delineate any scope of particular embodiments of the disclosure, or any scope of the claims. Its sole purpose is to present some concepts of the disclosure in a simplified form as a prelude to the more detailed description that is presented later.
- In accordance with one or more embodiments and corresponding disclosure, various non-limiting aspects are described in connection with automatic algorithmic composition. In an embodiment, a method is provided comprising receiving, by a system comprising a processor from a data store, tone data determined from a set of songs represented by a set of notes and a set of song lyrics represented by a set of words, wherein the tone data is selected from the data store based at least on first correlation data that correlates the set of notes to the set of words; determining, by the system, a pattern at least based on a correlation between a subset of the songs represented by a subset of the notes and a subset of the song lyrics represented by a subset of the words; creating, by the system, a composition model based at least on the pattern; generating, by the system, a melody based at least on the composition model; and pairing, by the system, the melody at least to the subset of the song lyrics.
- The method can further comprise analyzing, by the system, respective key signatures comprising respective major scales or respective minor scales of respective songs of the set of songs based at least on respective frequency distributions of respective sets of notes associated with the respective songs of the set of songs. In another aspect, the method can further comprise matching, by the system, respective musical syllable identifiers to letters representing respective notes of the set of notes. In yet another aspect, the method can further comprise assigning, by the system, respective tone data values to respective syllable segments associated with respective words of the set of words based at least on second correlation data that correlates the tone data to the syllable identifiers from the data store.
- The following description and the annexed drawings set forth certain illustrative aspects of the disclosure. These aspects are indicative, however, of but a few of the various ways in which the principles of the disclosure may be employed. Other aspects of the disclosure will become apparent from the following detailed description of the disclosure when considered in conjunction with the drawings.
FIG. 1 illustrates a non-limiting example of syllables associated with a word and the tonal stresses associated with respective syllables.
FIG. 2 illustrates a non-limiting example of a song lyric and a song melody.
FIG. 3 illustrates a non-limiting example of a system for generating a melody based on the lyric-note correlation between the notes and lyrics of a song.
FIG. 4A illustrates an example non-limiting probabilistic automaton in connection with generating a song melody.
FIG. 4B illustrates an example non-limiting tone input data sequence in connection with generating a song melody.
FIG. 5 illustrates a non-limiting example method for generating a melody in connection with a set of song lyrics and a set of notes.
FIG. 6 illustrates a non-limiting example method for generating a melody in connection with a set of song lyrics and a set of notes.
FIG. 7 is a block diagram representing an exemplary non-limiting networked environment in which the various embodiments can be implemented.
FIG. 8 is a block diagram representing an exemplary non-limiting computing system or operating environment in which the various embodiments may be implemented.
- The various embodiments are now described with reference to the drawings, wherein like reference numerals are used to refer to like elements throughout. In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the various embodiments. It may be evident, however, that the various embodiments can be practiced without these specific details. In other instances, well-known structures and components are shown in block diagram form in order to facilitate describing the various embodiments.
- As mentioned in the background, there have been various studies on the topic of algorithmic composition. However, none of the existing approaches take lyrics into consideration for melody composition. Yet, it has been observed that within a song, there usually exists a certain extent of correlation between its melody and its lyrics. Accordingly, various embodiments described herein utilize this type of correlation information for automatic melody composition. When a lyric is present in a song, algorithmic composition can consider not only the temporal correlation among all notes (or sounds) of the melody in the song, but also the lyric-note correlation between the notes and the lyrics in the song. A model is used to take into account the lyrics of existing songs and incorporate the correlation between song notes and song lyrics to generate a melody. Furthermore, a model is used to consider song patterns, tones, lyrics and songs of different languages to generate such melodies.
- By way of further introduction, this disclosure relates to a method for automatically composing a musical melody by taking into consideration correlations and relationships between a song's melody and its lyrics.
- When a lyric is present in a song, algorithmic composition can thus consider not only the temporal correlation among the notes (or sounds) of the melody in the song but also the lyric-note correlation between the notes and the lyrics in the song. In this regard, the existing approaches to algorithmic composition do not take into account the lyric-note correlation due to the absence of lyrics in such algorithmic composition studies.
- The lyric-note correlation corresponds to the correlation between the changing trend of a sequence of consecutive notes (also referred to as a set of notes) and the changing trend of a sequence of consecutive corresponding song lyrics (also referred to as a set of song lyrics) represented by a sequence of consecutive corresponding words. The changing trend of a sequence of notes corresponds to a series of pitch differences between every two adjacent notes, since each note has its pitch (or its frequency). The changing trend of a sequence of words (wherein each word can be segmented into one or more syllables) corresponds to a series of tone differences between every two adjacent syllables, since each syllable has its tone. For example, turning now to FIG. 1, FIG. 1 is an illustration of the English word "international", which has 5 syllables, particularly, "in" illustrated at 102, "ter" at 104, "na" at 106, "tion" at 108, and "al" at 110. In an aspect, each syllable is spoken in one of three kinds of stresses or tones, namely the primary stress, the secondary stress and the non-stress. The primary stress is a sound associated with utterance of a syllable with a higher frequency, the secondary stress is a sound with a lower frequency and the non-stress is a sound with the lowest frequency. In FIG. 1, the third syllable (e.g., "na" at 106) corresponds to the primary stress, the first syllable corresponds to the secondary stress (e.g., "in" at 102) and each of the other syllables corresponds to the non-stress (e.g., "ter" at 104, "tion" at 108, or "al" at 110). In music, tones, which are steady periodic sounds often characterized by duration, pitch, intensity and timbre, appear in many languages in the world in addition to English. In Mandarin, there are four or five tones and each word has only one syllable. In Cantonese, there are six tones and each word also has only one syllable. Other languages with tones include Thai, Vietnamese, and so on.
- In an aspect, the lyric-note correlation can relate to algorithmic composition of a melody according to lyrics expressed in any number of languages. Given a lyric written in a language with different tones, a melody composer called T-Music (also referred to as "the system") can leverage the lyric-note correlation for melody composition. There are two phases in the system. The first phase is a preprocessing phase which finds lyric-note correlations by performing a frequent pattern mining task over a database or data store that stores numerous existing songs, each of which involves both the song's melody and the song's lyric. In an aspect, the patterns identified via the frequent pattern mining task reflect the lyric-note correlations and can be used to build a Probabilistic Automaton (herein referred to as "PA"). The second phase is a melody composition phase which generates a melody given a lyric by executing the PA generated in the first phase. In various embodiments, the system can access a robust knowledge source for melody composition in that the system utilizes not only an existing song database (stored at the data store), but also the tone information of the given lyric. Second, the system is highly user-friendly: a user who does not have much knowledge about music and does not know how to choose a suitable melody composition algorithm can still generate a melody by using the system. Furthermore, the user can gain a personal and convenient experience by using the system, wherein a melody can often be generated automatically based on a lyric written by the user.
- In an aspect, a song can be accompanied by song lyrics wherein the lyrics are a set of words. A set of song lyrics can be comprised of numerous lyric fragments, also referred to as a subset of words (e.g., one or more words in a sequence). As illustrated in FIG. 1, each respective word can be comprised of various tones and accordingly each syllable of a word is associated with a respective tone (e.g., primary stress, secondary stress, or non-stress). For instance, let T be the total number of tones. In this system, each tone is associated with a tone identifier, also referred to as a tone ID ∈ [1, T]. For example, in the English language, there are three possible tones where 1, 2 and 3 can be used to represent the tone IDs for the primary stress, the secondary stress and the non-stress, respectively. In Mandarin, there are 4 or 5 tones, and in Cantonese, there are 6 tones.
- Turning now to FIG. 2, illustrated are basic concepts in music theory. At 202, a segment of a melody is illustrated wherein the melody is represented by a sequence of notes, and at 204 a lyric is illustrated which is represented by a sequence of words. An entire song can comprise a set of lyrics and a set of notes, wherein the melody is represented by the set of notes in sequence. Each note is associated with a pitch, wherein the pitch denotes the frequency of the sound that corresponds with the note, and a duration (e.g., the interval of time of the sound). In an aspect, a note can thus be characterized by a pitch and a duration.
- Turning now to
FIG. 3 , illustrated is a system presenting the architecture of T-Music. Illustrated atFIG. 3 issystem 300 comprising various components including amemory 324 having stored thereon computer executable components, and aprocessor 326 configured to execute computer executable components stored in the memory. In an aspect, asong database 302 stores songs and data associated with such songs. Thesystem 300 is comprised of a Phase I subsystem that employstone extraction component 308, frequentpattern mining component 310,frequent patterns 312, and probabilisticautomaton building component 314. In an aspect,data store 304 stores tone data, data values, tone look-up tables that comprise mapping between the syllable of each word and the tone ID. For each song of a set of songs stored at the song database and each lyric associated with a respective song,system 300 employstone extraction component 308 to extract tone data. Furthermore, in an aspect,tone extraction component 308 identifies the tone sequence and thus the s-sequence for each respective song. In another aspect, frequentpattern mining component 310 determines thefrequent patterns 312 associated with the set of songs based on the identified s-sequences. In an aspect, thefrequent patterns 312 correspond to the lyric-note correlation. In another aspect,system 300 also employs probabilisticautomaton building component 314 that builds a Probabilistic Automaton (PA) based on thefrequent patterns 312. - In another aspect,
system 300 is comprised of a Phase II subsystem, wherein the data store 304, lyric input component 306, tone extraction component 308, tone sequence component 318, and melody composition component 320 are components employed by the Phase II subsystem. In an aspect, the memory 324, data store 304, and processor 326 are employed by both the Phase I and Phase II subsystems. The lyric input component 306 can store a set of lyrics representing a variety of languages. In an aspect, system 300, via tone extraction component 308, extracts the tone sequence from one or more lyrics received from lyric input component 306. In another aspect, system 300 employs melody composition component 320, which generates a melody based on the PA and the extracted tone sequence. - In yet another aspect,
system 300 employs frequent pattern mining component 310 that determines the frequent patterns 312 associated with the set of songs based on the identified s-sequences. The act of frequent pattern mining can be described using the following representations. Let D be the set of s-sequences corresponding to the songs stored at the song database component 302. Let S be an s-sequence. The length of S, denoted by |S|, is the number of (note, tone ID)-pairs in S. In an aspect, S[i, j] represents the s-sequence comprising the (note, tone ID)-pairs which occur between the ith position and the jth position in S. For example, S[1, m] corresponds to S itself, where m is the length of S. Given two s-sequences S = ((n1, t1), . . . , (nm, tm)) and S′ = ((n′1, t′1), . . . , (n′m′, t′m′)), the concatenation of S and S′, denoted by S⋄S′, is defined as the s-sequence ((n1, t1), . . . , (nm, tm), (n′1, t′1), . . . , (n′m′, t′m′)). In an aspect, S′ is referred to as a sub-string of S if there exists an integer i such that S[i, i+m′−1] is exactly S′, where m′ is the length of S′. The support of an s-sequence S with respect to D is defined as the number of s-sequences in D that have S as a sub-string. Given a threshold δ, the frequent pattern mining component 310 identifies the s-sequences S whose support with respect to D is at least δ. An existing algorithm for frequent sub-sequence/sub-string mining is adopted for this purpose. For each frequent s-sequence S, its support is maintained, denoted by S.T.
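The support computation and the mining step can be sketched as follows. This brute-force version enumerates every sub-string up to a small maximum length instead of the (unnamed) frequent sub-string mining algorithm that is adopted, and it writes each (note, tone ID)-pair as a plain (pitch, tone ID) tuple:

```python
from collections import Counter

def support(S, D):
    """Support of s-sequence S wrt D: the number of s-sequences
    in D that contain S as a sub-string (a contiguous run)."""
    m = len(S)
    return sum(
        any(tuple(seq[i:i + m]) == tuple(S) for i in range(len(seq) - m + 1))
        for seq in D
    )

def frequent_patterns(D, delta, max_len=4):
    """Return every sub-string with support >= delta, with its support S.T."""
    counts = Counter()
    for seq in D:
        seen = set()  # count each pattern at most once per s-sequence
        for i in range(len(seq)):
            for j in range(i + 1, min(i + max_len, len(seq)) + 1):
                seen.add(tuple(seq[i:j]))
        counts.update(seen)
    return {S: c for S, c in counts.items() if c >= delta}

# Toy database of two s-sequences, pairs written as (pitch, tone ID).
D = [
    [("C4", 1), ("D4", 3), ("E4", 1)],
    [("C4", 1), ("D4", 3), ("G4", 2)],
]
print(frequent_patterns(D, delta=2, max_len=2))
# Patterns with support >= 2: (('C4', 1),), (('D4', 3),) and
# (('C4', 1), ('D4', 3)); each is contained in both s-sequences.
```
- Turning now to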
FIG. 4, illustrated is another aspect of system 300, wherein system 300 employs probabilistic automaton building component 314 that builds a Probabilistic Automaton (PA) based on the frequent patterns 312. In an aspect, a Probabilistic Automaton (PA) is a generalization of a Non-deterministic Finite Automaton (NFA). The NFA is designed for lexical analysis in automata theory. Formally, an NFA can be represented by a 5-tuple (Q, Σ, Δ, q0, F), where (1) Q is a finite set of states, (2) Σ is a set of input symbols, (3) Δ is a transition relation Q×Σ→P(Q), where P(Q) denotes the power set of Q, (4) q0 is the initial state and (5) F ⊆ Q is the set of final (accepting) states. A PA generalizes an NFA in such a way that the transitions in the PA happen with probabilities. Besides, the initial state q0 of an NFA, which is deterministic, is replaced in a PA with a probability vector v, each of whose entries corresponds to the probability that the initial state is equal to a given state in Q. Thus, a PA is represented with a 5-tuple (Q, Σ, Δ, v, F), where Q, Σ and F have the same meanings as their counterparts in an NFA, and each transition in Δ is associated with a probability. - Let T be the sequence of tone IDs extracted from the received lyric. An example of this sequence (called the tone sequence) is (2, 1, 3, 5) (illustrated at the first row 420 in FIG. 4(B)). In the following, the probabilistic automaton building act performed by probabilistic automaton building component 314 is described, wherein a PA is constructed that is represented by (Q, Σ, Δ, v, F). In an aspect, Q is constructed to be the set containing the s-sequences S that satisfy the following two conditions: (a) S has its length equal to l, where l is a user-given parameter, and (b) there exists S′ ∈ D such that S is a sub-string of S′. In another aspect, Σ is constructed to be the set containing the tone IDs. In another aspect, Δ is constructed as follows: Δ is initially set to be empty. Then, for each pair of a state q ∈ Q and a symbol t ∈ Σ, the following two steps are performed. First, a set of states is found, denoted by Qq,t, such that each state q′ in Qq,t satisfies the following: (1) q′[1:l−1] is exactly the same as q[2:l] and (2) q′[l].tone is exactly the same as t. - Second, for each state q′ ∈ Qq,t, a transition from q to q′ with the input of t is created in Δ, and its probability is set to q′.T divided by the sum of q″.T over all q″ ∈ Qq,t. In an aspect, for each state q ∈ Q, the probability that the initial state is q is set to q.T divided by the sum of q′.T over all q′ ∈ Q. In yet another aspect, F is constructed to be empty. This is because the termination of the execution on the PA in the melody composition is not indicated by the final states; instead, the execution terminates after all tone IDs in T have been inputted, where T is the sequence of tones extracted from the input lyric.
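Under these definitions, the PA construction admits a compact sketch. States are the length-l s-sequences, v is proportional to their supports, and each transition probability is the successor's support normalized over Qq,t. The function name build_pa and the tuple encoding of pairs are illustrative, with duration omitted as in FIG. 4(A):

```python
def build_pa(frequent, l):
    """Sketch of the PA-building step. `frequent` maps each frequent
    s-sequence (a tuple of (pitch, tone ID) pairs) to its support S.T."""
    Q = [S for S in frequent if len(S) == l]       # states
    sigma = {S[-1][1] for S in Q}                  # input alphabet, the tone IDs

    total = sum(frequent[q] for q in Q)
    v = {q: frequent[q] / total for q in Q}        # initial-state probability vector

    delta = {}  # (state, tone ID) -> list of (next state, probability)
    for q in Q:
        for t in sigma:
            # Q_{q,t}: states whose first l-1 pairs equal q's last l-1
            # pairs and whose last pair carries tone ID t.
            succ = [q2 for q2 in Q
                    if q2[:l - 1] == q[1:] and q2[-1][1] == t]
            z = sum(frequent[q2] for q2 in succ)
            if z:
                delta[(q, t)] = [(q2, frequent[q2] / z) for q2 in succ]
    return Q, sigma, delta, v
```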
- Turning now to
FIG. 4(A), presented is an instance of a PA. In the figure, the duration is omitted for simplicity. There are 5 states, q1, q2, q3, q4, q5, each represented by a box. The number next to each state is the support of its corresponding s-sequence, e.g., q1.T = 5. An arrow from one state to another denotes a transition, and the number along the arrow is the input symbol in Σ corresponding to the transition. Besides, the number within the parentheses is the probability associated with the corresponding transition. In an aspect, system 300 generates a melody via melody composition component 320. In an aspect, melody composition component 320 generates a melody by executing the PA constructed by the probabilistic automaton building component 314 with the input of the tone sequence extracted from the input lyric, i.e., T. Specifically, let (q1, q2, . . . , qn) be the sequence of resulting states when executing the PA with T as the input. Then the melody generated by system 300, which is a sequence of notes, is represented by (q1[1].note, q1[2].note, . . . , q1[l].note)⋄(q2[l].note)⋄(q3[l].note)⋄ . . . ⋄(qn[l].note). Note that qi[2:l] is exactly the same as qi+1[1:l−1], since there exists a transition from qi to qi+1 in Δ for 1 ≤ i ≤ n−1. - Specifically, during the execution process on the PA, the following scenario might occur: there exist no transitions from the current state, say q, to other states with the current input tone ID, say t, i.e., Δ(q, t) is empty. In this case, the execution process cannot proceed. To fix this issue, system 300 selects the state q′ in Q such that (1) q′[1:l−1] is the most similar to q[2:l], (2) q′[l].tone is exactly the same as t and (3) Δ(q′, t) is non-empty. The similarity measure adopted in system 300 is the common edit distance between two strings. In an aspect, melody composition component 320 executes the PA as illustrated in FIG. 4(A) with the input of the tone sequence as shown in FIG. 4(B). Suppose it chooses state q1 as the initial state; after that, the current state is q1 and the current input symbol is 3.
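The execution loop, including the edit-distance fallback, can be sketched as shown below. It consumes the structures returned by the build_pa sketch above; the assumption that the initial state accounts for the first l tone IDs, and the way the fallback state is spliced into the run, are our reading of the source rather than a specification it gives:

```python
import random

def edit_distance(a, b):
    """Common Levenshtein edit distance between two sequences."""
    dp = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, y in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1,        # delete x
                                     dp[j - 1] + 1,    # insert y
                                     prev + (x != y))  # substitute
    return dp[-1]

def compose(Q, delta, v, tones, l, rng=random):
    """Execute the PA on tone sequence `tones` and read off the melody."""
    q = rng.choices(list(v), weights=list(v.values()))[0]  # initial state
    melody = [p[0] for p in q]        # q[1].note, ..., q[l].note
    for t in tones[l:]:               # the first l tones are covered by q
        if (q, t) in delta:
            succ, probs = zip(*delta[(q, t)])
            q = rng.choices(succ, weights=probs)[0]
        else:
            # Dead end: jump to the state whose first l-1 pairs are most
            # similar (by edit distance) to q's last l-1 pairs and whose
            # last pair carries tone ID t. (The source also requires that
            # the chosen state have outgoing transitions; omitted here.)
            q = min((q2 for q2 in Q if q2[-1][1] == t),
                    key=lambda q2: edit_distance(q2[:l - 1], q[1:]))
        melody.append(q[-1][0])       # append q[l].note
    return melody
```
- In an aspect, some advanced concepts related to music theory were considered for melody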
composition using system 300. For instance, the harmony rule, rhythm, coherence, and vocal range concepts were considered with respect to system 300. Two examples of harmony rules are the chord progression and the cadence. Each song can be broken down into phrases; a phrase can be regarded as a sentence in a language. In music theory, each phrase ends with a cadence. A cadence is a certain kind of pattern that describes the ending of a phrase, much like a full stop or a comma in English. According to the concept of the cadence, the last few notes at the end of each phrase must come from certain particular notes. In an aspect, system 300 can generate the notes at the end of each phrase according to this cadence principle. In particular, when notes are generated at the end of a phrase, only the notes related to the cadence are considered instead of all possible notes. - Regarding rhythm, rhythm can also be used in generating the melody. For example, the last note of a phrase should be longer, and the rhythm of a phrase is typically similar to the rhythm of some of the other phrases. With respect to coherence, in a song, one part of the melody is usually similar to another part so that the song has a coherent effect. In an aspect,
system 300 can also incorporate this concept. Specifically, whenever another phrase of the melody is generated, the system investigates whether some portions of the previously generated melody can be used for the new portions of the melody to be composed automatically. If so, some existing portions of the melody are reused for the new portions. The criterion is whether each such existing portion of the melody, together with the corresponding portion of the lyric, can be found in the frequent patterns mined in Phase I. Regarding vocal range, the vocal range is the measure of the breadth of pitches that a human voice can sing, and some vocal ranges, such as those of a human, are considered bounded (e.g., at most two octaves). Based on the vocal range, system 300 can restrict the possible choices of notes to be generated whenever it executes the PA.
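These music-theory constraints amount to filtering the candidate notes at each step of the PA execution. A minimal sketch, in which both the cadence note set and the two-octave vocal range are assumed values chosen for illustration:

```python
# Hypothetical constraint filter layered on top of the PA execution.
# NOTE_ORDER spans an assumed two-octave vocal range; CADENCE_NOTES is
# an assumed set of notes permitted at the end of a phrase.
NOTE_ORDER = ["G3", "A3", "B3", "C4", "D4", "E4", "F4", "G4",
              "A4", "B4", "C5", "D5", "E5", "F5", "G5"]
CADENCE_NOTES = {"C4", "E4", "G4"}

def allowed(note: str, end_of_phrase: bool) -> bool:
    """Keep notes inside the vocal range; at the end of a phrase,
    additionally keep only the cadence notes."""
    if note not in NOTE_ORDER:                 # outside the vocal range
        return False
    return not end_of_phrase or note in CADENCE_NOTES

candidates = ["F3", "C4", "A4", "G5", "C6"]
print([n for n in candidates if allowed(n, end_of_phrase=True)])   # ['C4']
print([n for n in candidates if allowed(n, end_of_phrase=False)])  # ['C4', 'A4', 'G5']
```
- Turning now to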
FIGS. 5 and 6 , illustrated are methodologies or flow diagrams in accordance with certain aspects of this disclosure. While, for purposes of simplicity of explanation, the disclosed methods are shown and described as a series of acts, the disclosed subject matter is not limited by the order of acts, as some acts may occur in different orders and/or concurrently with other acts from that shown and described herein. For example, those skilled in the art will understand and appreciate that a methodology can alternatively be represented as a series of interrelated states or events, such as in a state diagram. Moreover, not all illustrated acts may be required to implement a method in accordance with the disclosed subject matter. Additionally, it is to be appreciated that the methodologies disclosed in this disclosure are capable of being stored on an article of manufacture to facilitate transporting and transferring such methodologies to computers or other computing devices. - Referring now to
FIG. 5, presented is a flow diagram of an example application of systems disclosed in this description in accordance with an embodiment. In an aspect, exemplary method 500 of the disclosed systems is stored in a memory and utilizes a processor to execute computer executable instructions to perform functions. At 502, tone data is received, by a system comprising a processor, from a data store, wherein the tone data is determined from a set of songs represented by a set of notes and a set of song lyrics represented by a set of words, and wherein the tone data is selected from the data store based at least on first correlation data that correlates the set of notes to the set of words. At 504, the system analyzes respective key signatures, comprising respective major scales or respective minor scales, of respective songs of the set of songs based at least on respective frequency distributions of respective sets of notes associated with the respective songs of the set of songs. At 506, the system matches respective musical syllable identifiers to letters representing respective notes of the set of notes. At 508, the system assigns respective tone data values to respective syllable segments associated with respective words of the set of words based at least on second correlation data that correlates the tone data to the syllable identifiers from the data store. At 510, a pattern is determined by the system, wherein the pattern is based at least on a correlation between a subset of the songs represented by a subset of the notes and a subset of the song lyrics represented by a subset of the words. At 512, a composition model based at least on the pattern is created by the system. In an aspect, the pattern is a sequence of two-tuples, wherein a first tuple element is a note comprising a pitch and a duration, a second tuple element is a tone identifier, and the sequence of two-tuples is represented as an association of the note and the tone identifier. At 514, a melody based at least on the composition model is generated by the system. At 516, the system pairs the melody at least to the subset of the song lyrics. In an aspect, the pairing comprises pairing the melody to the set of song lyrics.
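The key-signature analysis at 504 can be approximated by scoring every candidate scale against the note frequency distribution and keeping the best match. The sketch below is a crude stand-in for whatever procedure the system actually employs; the scale tables and the scoring rule are assumptions:

```python
from collections import Counter

PITCH_CLASSES = ["C", "C#", "D", "D#", "E", "F",
                 "F#", "G", "G#", "A", "A#", "B"]
MAJOR = [0, 2, 4, 5, 7, 9, 11]  # scale degrees as semitone offsets
MINOR = [0, 2, 3, 5, 7, 8, 10]  # natural minor

def guess_key(notes):
    """Score every major/minor scale by how much of the song's note
    frequency distribution falls inside it; return the best scale."""
    hist = Counter(n.rstrip("0123456789") for n in notes)  # drop octave digits
    best = None
    for tonic in range(12):
        for name, degrees in (("major", MAJOR), ("minor", MINOR)):
            scale = {PITCH_CLASSES[(tonic + d) % 12] for d in degrees}
            score = sum(c for pc, c in hist.items() if pc in scale)
            if best is None or score > best[0]:
                best = (score, f"{PITCH_CLASSES[tonic]} {name}")
    return best[1]

print(guess_key(["C4", "E4", "G4", "C5", "F4", "A4", "D4", "B4"]))
# -> 'C major' (ties with A minor broken by search order)
```
- Referring now to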
FIG. 6, presented is a flow diagram of an example application of systems disclosed in this description in accordance with an embodiment. In an aspect, exemplary method 600 of the disclosed systems is stored in a memory and utilizes a processor to execute computer executable instructions to perform functions. At 602, the system, comprising a processor, receives from a data store the subset of the notes, wherein the subset of the notes represents a major scale or a minor scale. At 604, the system extracts tone data associated with the subset of the words and the subset of the notes. At 606, the system maps the tone data to the melody based on the first pattern or the second pattern, where the first pattern is a pattern based on a song composition in a major scale and the second pattern is a pattern based on a song composition in a minor scale. At 608, the system selects the value of the tone data that occurs most frequently with regard to respective syllable segments associated with respective words of the subset of the words. At 610, a melody based at least on the composition model is generated by the system. In an aspect, the composition model is a probabilistic model based on at least one of the pattern, the first pattern, or the second pattern. At 612, the system pairs the melody at least to the subset of the song lyrics. In an aspect, the pairing comprises pairing the melody to the set of song lyrics. - In view of the exemplary systems described above, methodologies that may be implemented in accordance with the described subject matter will be better appreciated with reference to the flowcharts of the various figures. While for purposes of simplicity of explanation, the methodologies are shown and described as a series of blocks, it is to be understood and appreciated that the claimed subject matter is not limited by the order of the blocks, as some blocks may occur in different orders and/or concurrently with other blocks from what is depicted and described in this disclosure. Where non-sequential, or branched, flow is illustrated via a flowchart, it can be appreciated that various other branches, flow paths, and orders of the blocks may be implemented which achieve the same or a similar result. Moreover, not all illustrated blocks may be required to implement the methodologies described hereinafter.
- In addition to the various embodiments described in this disclosure, it is to be understood that other similar embodiments can be used, or modifications and additions can be made to the described embodiment(s) for performing the same or equivalent function of the corresponding embodiment(s) without deviating therefrom. Still further, multiple processing chips or multiple devices can share the performance of one or more functions described in this disclosure, and similarly, storage can be effected across a plurality of devices. Accordingly, the invention is not to be limited to any single embodiment, but rather can be construed in breadth, spirit and scope in accordance with the appended claims.
- The systems and processes described below can be embodied within hardware, such as a single integrated circuit (IC) chip, multiple ICs, an application specific integrated circuit (ASIC), or the like. Further, the order in which some or all of the process blocks appear in each process should not be deemed limiting. Rather, it should be understood that some of the process blocks can be executed in a variety of orders, not all of which may be explicitly illustrated in this disclosure.
- With reference to
FIG. 7, a suitable environment 700 for implementing various aspects of the claimed subject matter includes a computer 702. The computer 702 includes a processing unit 704, a system memory 706, a codec 705, and a system bus 708. The system bus 708 couples system components including, but not limited to, the system memory 706 to the processing unit 704. The processing unit 704 can be any of various available processors. Dual microprocessors and other multiprocessor architectures also can be employed as the processing unit 704. - The
system bus 708 can be any of several types of bus structure(s) including the memory bus or memory controller, a peripheral bus or external bus, and/or a local bus using any of a variety of available bus architectures including, but not limited to, Industry Standard Architecture (ISA), Micro-Channel Architecture (MCA), Extended ISA (EISA), Intelligent Drive Electronics (IDE), VESA Local Bus (VLB), Peripheral Component Interconnect (PCI), Card Bus, Universal Serial Bus (USB), Advanced Graphics Port (AGP), Personal Computer Memory Card International Association bus (PCMCIA), Firewire (IEEE 1394), and Small Computer Systems Interface (SCSI). - The
system memory 706 includes volatile memory 713 and non-volatile memory 712. The basic input/output system (BIOS), containing the basic routines to transfer information between elements within the computer 702, such as during start-up, is stored in non-volatile memory 712. In addition, according to various embodiments, codec 705 may include at least one of an encoder or decoder, wherein the at least one of an encoder or decoder may consist of hardware, a combination of hardware and software, or software. Although codec 705 is depicted as a separate component, codec 705 may be contained within non-volatile memory 712. By way of illustration, and not limitation, non-volatile memory 712 can include read only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory 713 includes random access memory (RAM), which acts as external cache memory. According to present aspects, the volatile memory may store the write operation retry logic (not shown in FIG. 7) and the like. By way of illustration and not limitation, RAM is available in many forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), and enhanced SDRAM (ESDRAM).
-
Computer 702 may also include removable/non-removable, volatile/non-volatile computer storage media. FIG. 7 illustrates, for example, disk storage 710. Disk storage 710 includes, but is not limited to, devices like a magnetic disk drive, solid state disk (SSD), floppy disk drive, tape drive, Jaz drive, Zip drive, LS-70 drive, flash memory card, or memory stick. In addition, disk storage 710 can include storage media separately or in combination with other storage media including, but not limited to, an optical disk drive such as a compact disk ROM device (CD-ROM), CD recordable drive (CD-R Drive), CD rewritable drive (CD-RW Drive) or a digital versatile disk ROM drive (DVD-ROM). To facilitate connection of the disk storage devices 710 to the system bus 708, a removable or non-removable interface is typically used, such as interface 716. - It is to be appreciated that
FIG. 7 describes software that acts as an intermediary between users and the basic computer resources described in the suitable operating environment 700. Such software includes an operating system 718. Operating system 718, which can be stored on disk storage 710, acts to control and allocate resources of the computer system 702. Applications 720 take advantage of the management of resources by the operating system through program modules 724 and program data 726, such as the boot/shutdown transaction table and the like, stored either in system memory 706 or on disk storage 710. It is to be appreciated that the claimed subject matter can be implemented with various operating systems or combinations of operating systems. - A user enters commands or information into the
computer 702 through input device(s) 728. Input devices 728 include, but are not limited to, a pointing device such as a mouse, trackball, stylus, touch pad, keyboard, microphone, joystick, game pad, satellite dish, scanner, TV tuner card, digital camera, digital video camera, web camera, and the like. These and other input devices connect to the processing unit 704 through the system bus 708 via interface port(s) 730. Interface port(s) 730 include, for example, a serial port, a parallel port, a game port, and a universal serial bus (USB). Output device(s) 736 use some of the same types of ports as input device(s) 728. Thus, for example, a USB port may be used to provide input to computer 702, and to output information from computer 702 to an output device 736. Output adapter 734 is provided to illustrate that there are some output devices 736, like monitors, speakers, and printers, among other output devices 736, which require special adapters. The output adapters 734 include, by way of illustration and not limitation, video and sound cards that provide a means of connection between the output device 736 and the system bus 708. It should be noted that other devices and/or systems of devices provide both input and output capabilities, such as remote computer(s) 738.
-
Computer 702 can operate in a networked environment using logical connections to one or more remote computers, such as remote computer(s) 738. The remote computer(s) 738 can be a personal computer, a server, a router, a network PC, a workstation, a microprocessor based appliance, a peer device, a smart phone, a tablet, or other network node, and typically includes many of the elements described relative to computer 702. For purposes of brevity, only a memory storage device 740 is illustrated with remote computer(s) 738. Remote computer(s) 738 is logically connected to computer 702 through a network interface 742 and then connected via communication connection(s) 744. Network interface 742 encompasses wire and/or wireless communication networks such as local-area networks (LAN), wide-area networks (WAN) and cellular networks. LAN technologies include Fiber Distributed Data Interface (FDDI), Copper Distributed Data Interface (CDDI), Ethernet, Token Ring and the like. WAN technologies include, but are not limited to, point-to-point links, circuit switching networks like Integrated Services Digital Networks (ISDN) and variations thereon, packet switching networks, and Digital Subscriber Lines (DSL). - Communication connection(s) 744 refers to the hardware/software employed to connect the
network interface 742 to the bus 708. While communication connection 744 is shown for illustrative clarity inside computer 702, it can also be external to computer 702. The hardware/software necessary for connection to the network interface 742 includes, for exemplary purposes only, internal and external technologies such as modems (including regular telephone grade modems, cable modems and DSL modems), ISDN adapters, and wired and wireless Ethernet cards, hubs, and routers. - Referring now to
FIG. 8, there is illustrated a schematic block diagram of a computing environment 800 in accordance with this disclosure. The system 800 includes one or more client(s) 802 (e.g., laptops, smart phones, PDAs, media players, computers, portable electronic devices, tablets, and the like). The client(s) 802 can be hardware and/or software (e.g., threads, processes, computing devices). The system 800 also includes one or more server(s) 804. The server(s) 804 can also be hardware or hardware in combination with software (e.g., threads, processes, computing devices). The servers 804 can house threads to perform transformations by employing aspects of this disclosure, for example. One possible communication between a client 802 and a server 804 can be in the form of a data packet transmitted between two or more computer processes, wherein the data packet may include video data. The data packet can include metadata, such as associated contextual information, for example. The system 800 includes a communication framework 806 (e.g., a global communication network such as the Internet, or mobile network(s)) that can be employed to facilitate communications between the client(s) 802 and the server(s) 804. - Communications can be facilitated via a wired (including optical fiber) and/or wireless technology. The client(s) 802 include or are operatively connected to one or more client data store(s) 808 that can be employed to store information local to the client(s) 802 (e.g., associated contextual information). Similarly, the server(s) 804 include or are operatively connected to one or more server data store(s) 810 that can be employed to store information local to the
servers 804. - In one embodiment, a
client 802 can transfer an encoded file, in accordance with the disclosed subject matter, to server 804. Server 804 can store the file, decode the file, or transmit the file to another client 802. It is to be appreciated that a client 802 can also transfer an uncompressed file to a server 804, and server 804 can compress the file in accordance with the disclosed subject matter. Likewise, server 804 can encode video information and transmit the information via communication framework 806 to one or more clients 802. - The illustrated aspects of the disclosure may also be practiced in distributed computing environments where certain tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules can be located in both local and remote memory storage devices.
- Moreover, it is to be appreciated that various components described in this description can include electrical circuit(s) that can include components and circuitry elements of suitable value in order to implement the various embodiments. Furthermore, it can be appreciated that many of the various components can be implemented on one or more integrated circuit (IC) chips. For example, in one embodiment, a set of components can be implemented in a single IC chip. In other embodiments, one or more of respective components are fabricated or implemented on separate IC chips.
- What has been described above includes examples of the embodiments of the present invention. It is, of course, not possible to describe every conceivable combination of components or methodologies for purposes of describing the claimed subject matter, but it is to be appreciated that many further combinations and permutations of the various embodiments are possible. Accordingly, the claimed subject matter is intended to embrace all such alterations, modifications, and variations that fall within the spirit and scope of the appended claims. Moreover, the above description of illustrated embodiments of the subject disclosure, including what is described in the Abstract, is not intended to be exhaustive or to limit the disclosed embodiments to the precise forms disclosed. While specific embodiments and examples are described in this disclosure for illustrative purposes, various modifications are possible that are considered within the scope of such embodiments and examples, as those skilled in the relevant art can recognize.
- In particular and in regard to the various functions performed by the above described components, devices, circuits, systems and the like, the terms used to describe such components are intended to correspond, unless otherwise indicated, to any component which performs the specified function of the described component (e.g., a functional equivalent), even though not structurally equivalent to the disclosed structure, which performs the function in the exemplary aspects of the claimed subject matter illustrated in this disclosure. In this regard, it will also be recognized that the various embodiments include a system as well as a computer-readable storage medium having computer-executable instructions for performing the acts and/or events of the various methods of the claimed subject matter.
- The aforementioned systems/circuits/modules have been described with respect to interaction between several components/blocks. It can be appreciated that such systems/circuits and components/blocks can include those components or specified sub-components, some of the specified components or sub-components, and/or additional components, and according to various permutations and combinations of the foregoing. Sub-components can also be implemented as components communicatively coupled to other components rather than included within parent components (hierarchical). Additionally, it should be noted that one or more components may be combined into a single component providing aggregate functionality or divided into several separate sub-components, and any one or more middle layers, such as a management layer, may be provided to communicatively couple to such sub-components in order to provide integrated functionality. Any components described in this disclosure may also interact with one or more other components not specifically described in this disclosure but known by those of skill in the art.
- In addition, while a particular feature of the various embodiments may have been disclosed with respect to only one of several implementations, such feature may be combined with one or more other features of the other implementations as may be desired and advantageous for any given or particular application. Furthermore, to the extent that the terms “includes,” “including,” “has,” “contains,” variants thereof, and other similar words are used in either the detailed description or the claims, these terms are intended to be inclusive in a manner similar to the term “comprising” as an open transition word without precluding any additional or other elements.
- As used in this application, the terms "component," "module," "system," or the like are generally intended to refer to a computer-related entity, either hardware (e.g., a circuit), a combination of hardware and software, software, or an entity related to an operational machine with one or more specific functionalities. For example, a component may be, but is not limited to being, a process running on a processor (e.g., digital signal processor), a processor, an object, an executable, a thread of execution, a program, and/or a computer. By way of illustration, both an application running on a controller and the controller can be a component. One or more components may reside within a process and/or thread of execution and a component may be localized on one computer and/or distributed between two or more computers. Further, a "device" can come in the form of specially designed hardware; generalized hardware made specialized by the execution of software thereon that enables the hardware to perform a specific function; software stored on a computer readable storage medium; software transmitted on a computer readable transmission medium; or a combination thereof.
- Moreover, the words “example” or “exemplary” are used in this disclosure to mean serving as an example, instance, or illustration. Any aspect or design described in this disclosure as “exemplary” is not necessarily to be construed as preferred or advantageous over other aspects or designs. Rather, use of the words “example” or “exemplary” is intended to present concepts in a concrete fashion. As used in this application, the term “or” is intended to mean an inclusive “or” rather than an exclusive “or”. That is, unless specified otherwise, or clear from context, “X employs A or B” is intended to mean any of the natural inclusive permutations. That is, if X employs A; X employs B; or X employs both A and B, then “X employs A or B” is satisfied under any of the foregoing instances. In addition, the articles “a” and “an” as used in this application and the appended claims should generally be construed to mean “one or more” unless specified otherwise or clear from context to be directed to a singular form.
- Computing devices typically include a variety of media, which can include computer-readable storage media and/or communications media, in which these two terms are used in this description differently from one another as follows. Computer-readable storage media can be any available storage media that can be accessed by the computer, is typically of a non-transitory nature, and can include both volatile and nonvolatile media, removable and non-removable media. By way of example, and not limitation, computer-readable storage media can be implemented in connection with any method or technology for storage of information such as computer-readable instructions, program modules, structured data, or unstructured data. Computer-readable storage media can include, but are not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disk (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or other tangible and/or non-transitory media which can be used to store desired information. Computer-readable storage media can be accessed by one or more local or remote computing devices, e.g., via access requests, queries or other data retrieval protocols, for a variety of operations with respect to the information stored by the medium.
- On the other hand, communications media typically embody computer-readable instructions, data structures, program modules or other structured or unstructured data in a data signal that can be transitory such as a modulated data signal, e.g., a carrier wave or other transport mechanism, and includes any information delivery or transport media. The term “modulated data signal” or signals refers to a signal that has one or more of its characteristics set or changed in such a manner as to encode information in one or more signals. By way of example, and not limitation, communication media include wired media, such as a wired network or direct-wired connection, and wireless media such as acoustic, RF, infrared and other wireless media.
- In view of the exemplary systems described above, methodologies that may be implemented in accordance with the described subject matter will be better appreciated with reference to the flowcharts of the various figures. For simplicity of explanation, the methodologies are depicted and described as a series of acts. However, acts in accordance with this disclosure can occur in various orders and/or concurrently, and with other acts not presented and described in this disclosure. Furthermore, not all illustrated acts may be required to implement the methodologies in accordance with certain aspects of this disclosure. In addition, those skilled in the art will understand and appreciate that the methodologies could alternatively be represented as a series of interrelated states via a state diagram or events. Additionally, it should be appreciated that the methodologies disclosed in this disclosure are capable of being stored on an article of manufacture to facilitate transporting and transferring such methodologies to computing devices. The term article of manufacture, as used in this disclosure, is intended to encompass a computer program accessible from any computer-readable device or storage media.
Claims (22)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US14/095,019 US9620092B2 (en) | 2012-12-21 | 2013-12-03 | Composition using correlation between melody and lyrics |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201261848028P | 2012-12-21 | 2012-12-21 | |
US14/095,019 US9620092B2 (en) | 2012-12-21 | 2013-12-03 | Composition using correlation between melody and lyrics |
Publications (2)
Publication Number | Publication Date |
---|---|
US20140174279A1 true US20140174279A1 (en) | 2014-06-26 |
US9620092B2 US9620092B2 (en) | 2017-04-11 |
Family
ID=50973162
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/095,019 Active US9620092B2 (en) | 2012-12-21 | 2013-12-03 | Composition using correlation between melody and lyrics |
Country Status (2)
Country | Link |
---|---|
US (1) | US9620092B2 (en) |
CN (1) | CN103902642B (en) |
Families Citing this family (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105070283B (en) * | 2015-08-27 | 2019-07-09 | 百度在线网络技术(北京)有限公司 | Method and apparatus for adding background music to a singing voice |
CN106547789B (en) * | 2015-09-22 | 2021-02-05 | 阿里巴巴集团控股有限公司 | Lyric generation method and device |
CN106653037B (en) * | 2015-11-03 | 2020-02-14 | 广州酷狗计算机科技有限公司 | Audio data processing method and device |
CN105513607B (en) * | 2015-11-25 | 2019-05-17 | 网易传媒科技(北京)有限公司 | A kind of method and apparatus for writing lyrics and setting them to music |
CN107123415B (en) * | 2017-05-04 | 2020-12-18 | 吴振国 | Automatic song editing method and system |
CN107122493B (en) * | 2017-05-19 | 2020-04-28 | 北京金山安全软件有限公司 | Song playing method and device |
KR101942814B1 (en) * | 2017-08-10 | 2019-01-29 | 주식회사 쿨잼컴퍼니 | Method for providing accompaniment based on user humming melody and apparatus for the same |
CN108831423B (en) * | 2018-05-30 | 2023-06-06 | 腾讯音乐娱乐科技(深圳)有限公司 | Method, device, terminal and storage medium for extracting main melody tracks from audio data |
CN109036355B (en) * | 2018-06-29 | 2023-04-25 | 平安科技(深圳)有限公司 | Automatic composing method, device, computer equipment and storage medium |
CN109166564B (en) * | 2018-07-19 | 2023-06-06 | 平安科技(深圳)有限公司 | Method, apparatus and computer readable storage medium for generating a musical composition for a lyric text |
CN109189974A (en) * | 2018-08-08 | 2019-01-11 | 平安科技(深圳)有限公司 | A kind of establishment method, system, device and storage medium for a music composition model |
TWI713958B (en) * | 2018-12-22 | 2020-12-21 | 淇譽電子科技股份有限公司 | Automated songwriting generation system and method thereof |
CN112309353B (en) * | 2020-10-30 | 2024-07-23 | 北京有竹居网络技术有限公司 | Composition method, composition device, electronic equipment and storage medium |
CN113035161B (en) * | 2021-03-17 | 2024-08-20 | 平安科技(深圳)有限公司 | Song melody generation method, device and equipment based on chord and storage medium |
Family Cites Families (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5281754A (en) | 1992-04-13 | 1994-01-25 | International Business Machines Corporation | Melody composer and arranger |
AU3734195A (en) | 1994-09-29 | 1996-04-19 | Apple Computer, Inc. | A system and method for determining the tone of a syllable of mandarin chinese speech |
AU6395800A (en) | 1999-08-02 | 2001-02-19 | Dynamix Direct, Inc. | Online composition and playback of audio content |
JP4277697B2 (en) * | 2004-01-23 | 2009-06-10 | ヤマハ株式会社 | SINGING VOICE GENERATION DEVICE, ITS PROGRAM, AND PORTABLE COMMUNICATION TERMINAL HAVING SINGING VOICE GENERATION FUNCTION |
KR20070059253A (en) | 2005-12-06 | 2007-06-12 | 최종민 | How to translate language into symbolic melody |
SE0600243L (en) | 2006-02-06 | 2007-02-27 | Mats Hillborg | melody Generator |
US7705231B2 (en) * | 2007-09-07 | 2010-04-27 | Microsoft Corporation | Automatic accompaniment for vocal melodies |
US7696426B2 (en) | 2006-12-19 | 2010-04-13 | Recombinant Inc. | Recombinant music composition algorithm and method of using the same |
US20090048837A1 (en) | 2007-08-14 | 2009-02-19 | Ling Ju Su | Phonetic tone mark system and method thereof |
WO2009107137A1 (en) | 2008-02-28 | 2009-09-03 | Technion Research & Development Foundation Ltd. | Interactive music composition method and apparatus |
- 2013-12-03 US US14/095,019 patent/US9620092B2/en active Active
- 2013-12-20 CN CN201310712131.8A patent/CN103902642B/en active Active
Patent Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5517892A (en) * | 1992-12-09 | 1996-05-21 | Yamaha Corporation | Electonic musical instrument having memory for storing tone waveform and its file name |
US5736663A (en) * | 1995-08-07 | 1998-04-07 | Yamaha Corporation | Method and device for automatic music composition employing music template information |
US6075193A (en) * | 1997-10-14 | 2000-06-13 | Yamaha Corporation | Automatic music composing apparatus and computer readable medium containing program therefor |
US6191349B1 (en) * | 1998-12-29 | 2001-02-20 | International Business Machines Corporation | Musical instrument digital interface with speech capability |
US6689946B2 (en) * | 2000-04-25 | 2004-02-10 | Yamaha Corporation | Aid for composing words of song |
US20010037721A1 (en) * | 2000-04-28 | 2001-11-08 | Yamaha Corporation | Apparatus and method for creating content comprising a combination of text data and music data |
US20040025671A1 (en) * | 2000-11-17 | 2004-02-12 | Mack Allan John | Automated music arranger |
US20050076772A1 (en) * | 2003-10-10 | 2005-04-14 | Gartland-Jones Andrew Price | Music composing system |
US7792782B2 (en) * | 2005-05-02 | 2010-09-07 | Silentmusicband Corp. | Internet music composition application with pattern-combination method |
US20090217805A1 (en) * | 2005-12-21 | 2009-09-03 | Lg Electronics Inc. | Music generating device and operating method thereof |
Cited By (56)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20130295533A1 (en) * | 2012-05-03 | 2013-11-07 | Lyrics2Learn, Llc | Method and System for Educational Linking of Lyrical Phrases and Musical Structure |
US20140260912A1 (en) * | 2013-03-14 | 2014-09-18 | Yamaha Corporation | Sound signal analysis apparatus, sound signal analysis method and sound signal analysis program |
US9087501B2 (en) | 2013-03-14 | 2015-07-21 | Yamaha Corporation | Sound signal analysis apparatus, sound signal analysis method and sound signal analysis program |
US9171532B2 (en) * | 2013-03-14 | 2015-10-27 | Yamaha Corporation | Sound signal analysis apparatus, sound signal analysis method and sound signal analysis program |
US20140260913A1 (en) * | 2013-03-15 | 2014-09-18 | Exomens Ltd. | System and method for analysis and creation of music |
US9183821B2 (en) * | 2013-03-15 | 2015-11-10 | Exomens | System and method for analysis and creation of music |
US9263013B2 (en) * | 2014-04-30 | 2016-02-16 | Skiptune, LLC | Systems and methods for analyzing melodies |
US20160098978A1 (en) * | 2014-04-30 | 2016-04-07 | Skiptune, LLC | Systems and methods for analyzing melodies |
US9454948B2 (en) * | 2014-04-30 | 2016-09-27 | Skiptune, LLC | Systems and methods for analyzing melodies |
US10062366B2 (en) * | 2014-10-20 | 2018-08-28 | Saronikos Trading And Services, Unipessoal Lda | Ringtone sequences based on music harmony, modulation symbols and calling telephone number |
CN104391980A (en) * | 2014-12-08 | 2015-03-04 | 百度在线网络技术(北京)有限公司 | Song generating method and device |
US11037541B2 (en) * | 2015-09-29 | 2021-06-15 | Shutterstock, Inc. | Method of composing a piece of digital music using musical experience descriptors to indicate what, when and how musical events should appear in the piece of digital music automatically composed and generated by an automated music composition and generation system |
US11430419B2 (en) | 2015-09-29 | 2022-08-30 | Shutterstock, Inc. | Automatically managing the musical tastes and preferences of a population of users requesting digital pieces of music automatically composed and generated by an automated music composition and generation system |
US12039959B2 (en) | 2015-09-29 | 2024-07-16 | Shutterstock, Inc. | Automated music composition and generation system employing virtual musical instrument libraries for producing notes contained in the digital pieces of automatically composed music |
US20170263228A1 (en) * | 2015-09-29 | 2017-09-14 | Amper Music, Inc. | Automated music composition system and method driven by lyrics and emotion and style type musical experience descriptors |
US11776518B2 (en) | 2015-09-29 | 2023-10-03 | Shutterstock, Inc. | Automated music composition and generation system employing virtual musical instrument libraries for producing notes contained in the digital pieces of automatically composed music |
US10163429B2 (en) * | 2015-09-29 | 2018-12-25 | Andrew H. Silverstein | Automated music composition and generation system driven by emotion-type and style-type musical experience descriptors |
US11657787B2 (en) | 2015-09-29 | 2023-05-23 | Shutterstock, Inc. | Method of and system for automatically generating music compositions and productions using lyrical input and music experience descriptors |
US10262641B2 (en) | 2015-09-29 | 2019-04-16 | Amper Music, Inc. | Music composition and generation instruments and music learning systems employing automated music composition engines driven by graphical icon based musical experience descriptors |
US11651757B2 (en) | 2015-09-29 | 2023-05-16 | Shutterstock, Inc. | Automated music composition and generation system driven by lyrical input |
US11468871B2 (en) | 2015-09-29 | 2022-10-11 | Shutterstock, Inc. | Automated music composition and generation system employing an instrument selector for automatically selecting virtual instruments from a library of virtual instruments to perform the notes of the composed piece of digital music |
US10311842B2 (en) * | 2015-09-29 | 2019-06-04 | Amper Music, Inc. | System and process for embedding electronic messages and documents with pieces of digital music automatically composed and generated by an automated music composition and generation engine driven by user-specified emotion-type and style-type musical experience descriptors |
US11011144B2 (en) * | 2015-09-29 | 2021-05-18 | Shutterstock, Inc. | Automated music composition and generation system supporting automated generation of musical kernels for use in replicating future music compositions and production environments |
US10467998B2 (en) * | 2015-09-29 | 2019-11-05 | Amper Music, Inc. | Automated music composition and generation system for spotting digital media objects and event markers using emotion-type, style-type, timing-type and accent-type musical experience descriptors that characterize the digital music to be automatically composed and generated by the system |
US11430418B2 (en) | 2015-09-29 | 2022-08-30 | Shutterstock, Inc. | Automatically managing the musical tastes and preferences of system users based on user feedback and autonomous analysis of music automatically composed and generated by an automated music composition and generation system |
US11037539B2 (en) | 2015-09-29 | 2021-06-15 | Shutterstock, Inc. | Autonomous music composition and performance system employing real-time analysis of a musical performance to automatically compose and perform music to accompany the musical performance |
US11037540B2 (en) * | 2015-09-29 | 2021-06-15 | Shutterstock, Inc. | Automated music composition and generation systems, engines and methods employing parameter mapping configurations to enable automated music composition and generation |
US20200168189A1 (en) * | 2015-09-29 | 2020-05-28 | Amper Music, Inc. | Method of automatically confirming the uniqueness of digital pieces of music produced by an automated music composition and generation system while satisfying the creative intentions of system users |
US20200168190A1 (en) * | 2015-09-29 | 2020-05-28 | Amper Music, Inc. | Automated music composition and generation system supporting automated generation of musical kernels for use in replicating future music compositions and production environments |
US10672371B2 (en) * | 2015-09-29 | 2020-06-02 | Amper Music, Inc. | Method of and system for spotting digital media objects and event markers using musical experience descriptors to characterize digital music to be automatically composed and generated by an automated music composition and generation engine |
US20170263227A1 (en) * | 2015-09-29 | 2017-09-14 | Amper Music, Inc. | Automated music composition and generation system driven by emotion-type and style-type musical experience descriptors |
US10854180B2 (en) | 2015-09-29 | 2020-12-01 | Amper Music, Inc. | Method of and system for controlling the qualities of musical energy embodied in and expressed by digital music to be automatically composed and generated by an automated music composition and generation engine |
US11030984B2 (en) * | 2015-09-29 | 2021-06-08 | Shutterstock, Inc. | Method of scoring digital media objects using musical experience descriptors to indicate what, where and when musical events should appear in pieces of digital music automatically composed and generated by an automated music composition and generation system |
US11017750B2 (en) * | 2015-09-29 | 2021-05-25 | Shutterstock, Inc. | Method of automatically confirming the uniqueness of digital pieces of music produced by an automated music composition and generation system while satisfying the creative intentions of system users |
CN105893460A (en) * | 2016-03-22 | 2016-08-24 | 上海班砖网络科技有限公司 | Automatic music composing method and device based on artificial intelligence technology |
US10891928B2 (en) * | 2017-04-26 | 2021-01-12 | Microsoft Technology Licensing, Llc | Automatic song generation |
US20200035209A1 (en) * | 2017-04-26 | 2020-01-30 | Microsoft Technology Licensing Llc | Automatic song generation |
WO2018200267A1 (en) * | 2017-04-26 | 2018-11-01 | Microsoft Technology Licensing, Llc | Automatic song generation |
US20200005743A1 (en) * | 2017-07-18 | 2020-01-02 | Vertical Craft, LLC | Music composition tools on a single pane-of-glass |
US10468001B2 (en) | 2017-07-18 | 2019-11-05 | Vertical Craft, LLC | Music composition tools on a single pane-of-glass |
US10043502B1 (en) * | 2017-07-18 | 2018-08-07 | Vertical Craft, LLC | Music composition tools on a single pane-of-glass |
US10854181B2 (en) * | 2017-07-18 | 2020-12-01 | Vertical Craft, LLC | Music composition tools on a single pane-of-glass |
US10971123B2 (en) * | 2017-07-18 | 2021-04-06 | Vertical Craft, LLC | Music composition tools on a single pane-of-glass |
US10311843B2 (en) * | 2017-07-18 | 2019-06-04 | Vertical Craft | Music composition tools on a single pane-of-glass |
US20190279607A1 (en) * | 2017-07-18 | 2019-09-12 | Vertical Craft, LLC | Music composition tools on a single pane-of-glass |
CN109448697A (en) * | 2018-10-08 | 2019-03-08 | 平安科技(深圳)有限公司 | Poem melody generation method, electronic device and computer readable storage medium |
CN111465979A (en) * | 2018-10-19 | 2020-07-28 | 索尼公司 | Information processing method, information processing apparatus, and information processing program |
CN109741724A (en) * | 2018-12-27 | 2019-05-10 | 歌尔股份有限公司 | Method, apparatus and smart speaker for making songs |
CN112185321A (en) * | 2019-06-14 | 2021-01-05 | 微软技术许可有限责任公司 | Song generation |
US10964299B1 (en) | 2019-10-15 | 2021-03-30 | Shutterstock, Inc. | Method of and system for automatically generating digital performances of music compositions using notes selected from virtual musical instruments based on the music-theoretic states of the music compositions |
US11037538B2 (en) | 2019-10-15 | 2021-06-15 | Shutterstock, Inc. | Method of and system for automated musical arrangement and musical instrument performance style transformation supported within an automated music performance system |
US11024275B2 (en) | 2019-10-15 | 2021-06-01 | Shutterstock, Inc. | Method of digitally performing a music composition using virtual musical instruments having performance logic executing within a virtual musical instrument (VMI) library management system |
US11948542B2 (en) * | 2020-01-31 | 2024-04-02 | Obeebo Labs Ltd. | Systems, devices, and methods for computer-generated musical note sequences |
CN112309435A (en) * | 2020-10-30 | 2021-02-02 | 北京有竹居网络技术有限公司 | Method and device for generating main melody, electronic equipment and storage medium |
CN112951187A (en) * | 2021-03-24 | 2021-06-11 | 平安科技(深圳)有限公司 | Buddhist chant music generation method, device, equipment and storage medium |
CN113539216A (en) * | 2021-06-29 | 2021-10-22 | 广州酷狗计算机科技有限公司 | Melody creation navigation method, and device, equipment, medium, and product thereof |
Also Published As
Publication number | Publication date |
---|---|
US9620092B2 (en) | 2017-04-11 |
CN103902642B (en) | 2017-11-10 |
CN103902642A (en) | 2014-07-02 |
Similar Documents
Publication | Title |
---|---|
US9620092B2 (en) | Composition using correlation between melody and lyrics |
WO2020015153A1 (en) | Method and device for generating music for lyrics text, and computer-readable storage medium |
JP2021197133A (en) | Semantic matching method, device, electronic apparatus, storage medium, and computer program |
JP2016502701A (en) | Ranking for recursive synthesis of string transformations |
EP3477643B1 (en) | Audio fingerprint extraction and audio recognition using said fingerprints |
Wang et al. | To catch a chorus, verse, intro, or anything else: Analyzing a song with structural functions |
US8609969B2 (en) | Automatically acquiring feature segments in a music file |
CN115132209B (en) | Speech recognition method, apparatus, device, and medium |
CN115438709A (en) | Code similarity detection method based on code attribute graph |
JP2021197179A (en) | Entity identification method, device, and computer-readable storage medium |
US20210027761A1 (en) | Automatic translation using deep learning |
Herremans et al. | Composer classification models for music-theory building |
CN110164460A (en) | Singing synthesis method and device |
Kolozali et al. | Automatic ontology generation for musical instruments based on audio analysis |
JP2020003535A (en) | Program, information processing method, electronic device, and trained model |
Sun et al. | Emotion painting: lyric, affect, and musical relationships in a large lead-sheet corpus |
Lv et al. | Re-creation of creations: A new paradigm for lyric-to-melody generation |
CN113870818B (en) | Training method, device, medium, and computing device for a song chord arrangement model |
CN111863030A (en) | Audio detection method and device |
Ma et al. | Robust Melody Track Identification in Symbolic Music |
Dixon et al. | Probabilistic and logic-based modelling of harmony |
Wei | [Retracted] Intonation Characteristics of Singing Based on Artificial Intelligence Technology and Its Application in Song-on-Demand Scoring System |
Tang et al. | Harmonic Classification with Enhancing Music Using Deep Learning Techniques |
CN115357712A (en) | Aspect-level sentiment analysis method, device, electronic device, and storage medium |
Zhang | Music Data Feature Analysis and Extraction Algorithm Based on Music Melody Contour |
Legal Events
Code | Title | Description |
---|---|---|
AS | Assignment | Owner name: THE HONG KONG UNIVERSITY OF SCIENCE AND TECHNOLOGY. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST; ASSIGNORS: WONG, CHI WING; SZE, RAYMOND KA WAI; LONG, CHENG. REEL/FRAME: 031704/0665. Effective date: 20131128 |
STCF | Information on status: patent grant | Free format text: PATENTED CASE |
MAFP | Maintenance fee payment | Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YR, SMALL ENTITY (ORIGINAL EVENT CODE: M2551); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY. Year of fee payment: 4 |
MAFP | Maintenance fee payment | Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YR, SMALL ENTITY (ORIGINAL EVENT CODE: M2552); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY. Year of fee payment: 8 |