US20190147863A1 - Method and apparatus for playing multimedia - Google Patents

Method and apparatus for playing multimedia

Info

Publication number
US20190147863A1
Authority
US
United States
Prior art keywords
play
multimedia
voice
user
list
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US15/858,538
Inventor
Guang Lu
Shiquan YE
Xiajun LUO
Xiangjie YIN
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Baidu Online Network Technology Beijing Co Ltd
Shanghai Xiaodu Technology Co Ltd
Original Assignee
Baidu Online Network Technology Beijing Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Baidu Online Network Technology Beijing Co Ltd filed Critical Baidu Online Network Technology Beijing Co Ltd
Assigned to BAIDU ONLINE NETWORK TECHNOLOGY (BEIJING) CO., LTD. reassignment BAIDU ONLINE NETWORK TECHNOLOGY (BEIJING) CO., LTD. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: LU, GUANG, LUO, XIAJUN, YE, SHIQUAN, YIN, XIANGJIE
Publication of US20190147863A1 publication Critical patent/US20190147863A1/en
Assigned to BAIDU ONLINE NETWORK TECHNOLOGY (BEIJING) CO., LTD., SHANGHAI XIAODU TECHNOLOGY CO. LTD. reassignment BAIDU ONLINE NETWORK TECHNOLOGY (BEIJING) CO., LTD. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: BAIDU ONLINE NETWORK TECHNOLOGY (BEIJING) CO., LTD.
Abandoned legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/40: Information retrieval; Database structures therefor; File system structures therefor of multimedia data, e.g. slideshows comprising image and additional audio data
    • G06F 16/43: Querying
    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00: Speech recognition
    • G10L 15/22: Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/40: Information retrieval; Database structures therefor; File system structures therefor of multimedia data, e.g. slideshows comprising image and additional audio data
    • G06F 16/43: Querying
    • G06F 16/435: Filtering based on additional data, e.g. user or group profiles
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/40: Information retrieval; Database structures therefor; File system structures therefor of multimedia data, e.g. slideshows comprising image and additional audio data
    • G06F 16/43: Querying
    • G06F 16/438: Presentation of query results
    • G06F 16/4387: Presentation of query results by the use of playlists
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/40: Information retrieval; Database structures therefor; File system structures therefor of multimedia data, e.g. slideshows comprising image and additional audio data
    • G06F 16/43: Querying
    • G06F 16/438: Presentation of query results
    • G06F 16/4387: Presentation of query results by the use of playlists
    • G06F 16/4393: Multimedia presentations, e.g. slide shows, multimedia albums
    • G06F 17/30056
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/16: Sound input; Sound output
    • G06F 3/167: Audio in a user interface, e.g. using voice commands for navigating, audio feedback
    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00: Speech recognition
    • G10L 15/02: Feature extraction for speech recognition; Selection of recognition unit
    • G: PHYSICS
    • G11: INFORMATION STORAGE
    • G11B: INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B 27/00: Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B 27/10: Indexing; Addressing; Timing or synchronising; Measuring tape travel
    • G11B 27/102: Programmed access in sequence to addressed parts of tracks of operating record carriers
    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00: Speech recognition
    • G10L 15/22: Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L 2015/223: Execution procedure of a spoken command
    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00: Speech recognition
    • G10L 15/22: Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L 2015/225: Feedback of the input speech

Definitions

  • the present disclosure provides an embodiment of an apparatus for playing multimedia, and the embodiment of the apparatus for playing multimedia corresponds to the embodiment of the method for playing multimedia shown in FIG. 1 to FIG. 3. Therefore, the operations and features described above with respect to the method for playing multimedia in FIG. 1 to FIG. 3 are also applicable to an apparatus for playing multimedia 400 and the units contained therein, and will not be reiterated here.
  • the apparatus for playing multimedia 400 includes: a receiving unit 410 for receiving a voice play request inputted by a user; an extracting unit 420 for extracting a scheduled play timing and a play parameter from the voice play request; a generating unit 430 for generating a multimedia list based on the play parameter; and a playing unit 440 for playing multimedia in the multimedia list, in response to a current timing meeting the scheduled play timing.
  • the scheduled play timing extracted by the extracting unit 420 includes one or more of: a sorting position, a play time and a play scene of the multimedia.
  • the play parameter extracted by the extracting unit 420 comprises one or more of the following parameters of the multimedia: a name, a leading creative staff, a thematic multimedia list, a list of interested multimedia, a language, a style, a scene, an emotion, and a theme.
  • the apparatus 400 further comprises: a feedback unit 450 for feeding back response information of the voice play request by voice to the user.
  • the generating unit 430 is further configured for: generating a to-be-played song playlist based on the play parameter and one or more of: a contemporary popularity of the multimedia, a user portrait, and user preference feedback data.
  • the feedback unit 450 is further configured for one or more of: feeding back received instruction information by voice, in response to generating the multimedia list; feeding back, by voice, that no relevant song is found, in response to any one of: no play parameter being extracted from the voice play request, and a to-be-played song list not being generated based on the play parameters; and feeding back, by voice, that the multimedia requested by the user has no copyright, in response to a multimedia library not having a multimedia version meeting the play parameter.
  • the receiving unit 410 comprises: a wake-up subunit 411 for receiving a wake-up instruction inputted by the user; a feedback subunit 412 for feeding back response information by voice; and a receiving subunit 413 for receiving a voice play request inputted by the user.
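  • To visualize the unit decomposition of the apparatus 400, here is a minimal, self-contained Python sketch of the units and subunits as composed objects; the method names, the toy slot extraction, and the console-based stand-ins for speech input and output are assumptions, not the claimed implementation.

```python
class WakeUpSubunit:                       # cf. wake-up subunit 411
    def received_wake_up(self, utterance: str) -> bool:
        return "little a" in utterance.lower()


class FeedbackSubunit:                     # cf. feedback subunit 412
    def respond(self) -> None:
        print("terminal: hey!")


class ReceivingSubunit:                    # cf. receiving subunit 413
    def receive_request(self, utterance: str) -> str:
        return utterance


class ReceivingUnit:                       # cf. receiving unit 410
    def __init__(self) -> None:
        self.wake_up = WakeUpSubunit()
        self.feedback = FeedbackSubunit()
        self.receiver = ReceivingSubunit()


class ExtractingUnit:                      # cf. extracting unit 420
    def extract(self, text: str):
        text = text.lower()
        timing = "next song" if "next song" in text else "now"
        parameter = text.replace("play the next song", "").strip()
        return timing, parameter


class GeneratingUnit:                      # cf. generating unit 430
    def generate(self, parameter: str, library):
        return [item for item in library if parameter in item.lower()]


class PlayingUnit:                         # cf. playing unit 440
    def play(self, playlist) -> None:
        for item in playlist:
            print(f"playing: {item}")


class FeedbackUnit:                        # cf. feedback unit 450
    def confirm(self, playlist) -> None:
        print("terminal: alright" if playlist else "terminal: no relevant songs are found")


class MultimediaPlayingApparatus:          # cf. apparatus 400
    def __init__(self) -> None:
        self.receiving = ReceivingUnit()
        self.extracting = ExtractingUnit()
        self.generating = GeneratingUnit()
        self.playing = PlayingUnit()
        self.feedback = FeedbackUnit()


if __name__ == "__main__":
    apparatus = MultimediaPlayingApparatus()
    if apparatus.receiving.wake_up.received_wake_up("Little A"):
        apparatus.receiving.feedback.respond()
    request = apparatus.receiving.receiver.receive_request("play the next song ABC")
    timing, parameter = apparatus.extracting.extract(request)
    playlist = apparatus.generating.generate(parameter, ["ABC (single)", "ABC (cover)", "XYZ"])
    apparatus.feedback.confirm(playlist)
    apparatus.playing.play(playlist)       # assuming the "next song" timing has been met
```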
  • the present disclosure also provides an apparatus comprising: one or more processors; and a storage device for storing one or more programs; wherein the one or more processors implement the method for playing multimedia according to any of the foregoing when the one or more programs are executed by the one or more processors.
  • the present disclosure further provides an embodiment of a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the method for playing multimedia according to any of the foregoing.
  • FIG. 5 illustrates a structural diagram of a computer system 500 suitable for implementing a terminal device or a server of an embodiment of the present disclosure.
  • the terminal device shown in FIG. 5 is merely an example, and should not impose any limitation on the function and the scope of use of the embodiments of the present disclosure.
  • the computer system 500 includes a central processing unit (CPU) 501, which may execute various appropriate actions and processes in accordance with a program stored in a read-only memory (ROM) 502 or a program loaded into a random access memory (RAM) 503 from a storage portion 508.
  • the RAM 503 also stores various programs and data required by operations of the system 500.
  • the CPU 501, the ROM 502 and the RAM 503 are connected to each other through a bus 504.
  • An input/output (I/O) interface 505 is also connected to the bus 504.
  • the following components are connected to the I/O interface 505: an input portion 506 including a keyboard, a mouse, etc.; an output portion 507 comprising a cathode ray tube (CRT), a liquid crystal display device (LCD), a speaker, etc.; a storage portion 508 including a hard disk and the like; and a communication portion 509 comprising a network interface card, such as a LAN card and a modem.
  • the communication portion 509 performs communication processes via a network, such as the Internet.
  • a driver 510 is also connected to the I/O interface 505 as required.
  • a removable medium 511, such as a magnetic disk, an optical disk, a magneto-optical disk, and a semiconductor memory, may be installed on the driver 510, to facilitate the retrieval of a computer program from the removable medium 511, and the installation thereof on the storage portion 508 as needed.
  • an embodiment of the present disclosure includes a computer program product, which comprises a computer program that is tangibly embedded in a machine-readable medium.
  • the computer program comprises program codes for executing the method as illustrated in the flow chart.
  • the computer program may be downloaded and installed from a network via the communication portion 509, and/or may be installed from the removable medium 511.
  • the computer program, when executed by the central processing unit (CPU) 501, implements the above-mentioned functionalities as defined by the methods of the present disclosure.
  • the computer readable medium in the present disclosure may be a computer readable signal medium, a computer readable storage medium, or any combination of the two.
  • An example of the computer readable storage medium may include, but is not limited to: electric, magnetic, optical, electromagnetic, infrared, or semiconductor systems, apparatuses, elements, or any combination of the above.
  • a more specific example of the computer readable storage medium may include, but is not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read only memory (ROM), an erasable programmable read only memory (EPROM or flash memory), an optical fibre, a portable compact disk read only memory (CD-ROM), an optical memory, a magnetic memory, or any suitable combination of the above.
  • the computer readable storage medium may be any physical medium containing or storing programs which can be used by, or incorporated into, a command execution system, apparatus or element.
  • the computer readable signal medium may include a data signal in the baseband or propagated as a part of a carrier wave, in which computer readable program codes are carried.
  • the propagating signal may take various forms, including but not limited to: an electromagnetic signal, an optical signal or any suitable combination of the above.
  • the computer readable signal medium may be any computer readable medium other than the computer readable storage medium.
  • the computer readable medium is capable of transmitting, propagating or transferring programs for use by, or in combination with, a command execution system, apparatus or element.
  • the program codes contained on the computer readable medium may be transmitted with any suitable medium including but not limited to: wireless, wired, optical cable, RF medium etc., or any suitable combination of the above.
  • each of the blocks in the flow charts or block diagrams may represent a module, a program segment, or a code portion, said module, program segment, or code portion comprising one or more executable instructions for implementing specified logic functions.
  • the functions denoted by the blocks may occur in a sequence different from the sequences shown in the figures. For example, any two blocks presented in succession may be executed substantially in parallel, or they may sometimes be executed in a reverse order, depending on the function involved.
  • each block in the block diagrams and/or flow charts as well as a combination of blocks may be implemented using a dedicated hardware-based system executing specified functions or operations, or by a combination of a dedicated hardware and computer instructions.
  • the units or modules involved in the embodiments of the present disclosure may be implemented by means of software or hardware.
  • the described units or modules may also be provided in a processor, for example, described as: a processor comprising a receiving unit, an extracting unit, a generating unit and a playing unit, where the names of these units or modules do not in some cases constitute a limitation to such units or modules themselves.
  • the receiving unit may also be described as “a unit for receiving a voice play request inputted by a user.”
  • the present disclosure further provides a non-volatile computer-readable storage medium.
  • the non-volatile computer-readable storage medium may be the non-volatile computer storage medium included in the apparatus in the above described embodiments, or a stand-alone non-volatile computer-readable storage medium not assembled into the apparatus.
  • the non-volatile computer-readable storage medium stores one or more programs. The one or more programs, when executed by a device, cause the device to: receive a voice play request inputted by a user; extract a scheduled play timing and a play parameter from the voice play request; generate a multimedia list based on the play parameter; and play multimedia in the multimedia list, in response to a current timing meeting the scheduled play timing.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Human Computer Interaction (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Acoustics & Sound (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Circuits Of Receivers In General (AREA)

Abstract

Embodiments of the present disclosure disclose methods and apparatuses for playing multimedia. A specific implementation of the methods includes: receiving a voice play request inputted by a user; extracting a scheduled play timing and a play parameter from the voice play request; generating a multimedia list based on the play parameter; and playing multimedia in the multimedia list, in response to a current timing meeting the scheduled play timing. The embodiment improves the quality and pertinence of the played multimedia.

Description

    CROSS-REFERENCE TO RELATED APPLICATION
  • This application claims the priority of Chinese Patent Application No. 20171119577.4, entitled “Method and Apparatus for Playing Multimedia,” filed on Nov. 14, 2017, the content of which is incorporated herein by reference in its entirety.
  • TECHNICAL FIELD
  • Embodiments of the present disclosure relate to the field of computer technologies, generally to the field of computer network technologies, and in particular to a method and apparatus for playing multimedia.
  • BACKGROUND
  • With the coming of the network age, increasingly more users tend to accept intelligent services. Taking the audio-visual service as an example, users expect smart terminals to be capable of comprehending a user's voice inputs and providing the user with personalized audio-visual services based on the interpretation of the user's voice.
  • At present, in an audio-visual voice interaction scenario involving a smart terminal, the terminal can perform real-time retrieval and playback in response to a voice input of the user. To respond to the user's ad hoc needs, the smart terminal interrupts the currently playing song and then changes the currently playing multimedia content based on its interpretation of the user's voice.
  • SUMMARY
  • An object of the embodiments of the present disclosure is to provide a method and an apparatus for playing multimedia.
  • In the first aspect, embodiments of the present disclosure provide a method for playing multimedia. The method comprises: receiving a voice play request inputted by a user; extracting a scheduled play timing and a play parameter from the voice play request; generating a multimedia list based on the play parameter; and playing multimedia in the multimedia list, in response to a current timing meeting the scheduled play timing.
  • In some embodiments, the scheduled play timing comprises one or more of: a sorting position, a play time and a play scene of the multimedia.
  • In some embodiments, the play parameter comprises one or more of following parameters of the multimedia: a name, a leading creative staff, a thematic multimedia list, a list of interested multimedia, a language, a style, a scene, an emotion, and a theme.
  • In some embodiments, the method further comprises: feeding back response information of the voice play request by voice to the user.
  • In some embodiments, generating a to-be-played song playlist based on the play parameter comprises: generating the to-be-played song playlist based on the play parameter and one or more of: a contemporary popularity of the multimedia, a portrait of the user, and feedback data of a preference of the user.
  • In some embodiments, the feeding back response information of the voice play request by voice to the user comprises one or more of: feeding back received instruction information by voice, in response to generating the multimedia list; feeding back, by voice, that no relevant song is found, in response to any one of: no play parameter being extracted from the voice play request, and a to-be-played song list not being generated based on the play parameters; and feeding back, by voice, that the multimedia requested by the user has no copyright, in response to a multimedia library not having a multimedia version meeting the play parameter.
  • In some embodiments, the receiving a voice play request inputted by a user comprises: receiving a wake-up instruction inputted by the user; and feeding back response information by voice and receiving the voice play request inputted by the user.
  • In the second aspect, the embodiments of the present disclosure provide an apparatus for playing multimedia. The apparatus comprises: a receiving unit for receiving a voice play request inputted by a user; an extracting unit for extracting a scheduled play timing and a play parameter from the voice play request; a generating unit for generating a multimedia list based on the play parameter; and a playing unit for playing multimedia in the multimedia list, in response to a current timing meeting the scheduled play timing.
  • In some embodiments, the scheduled play timing extracted by the extracting unit comprises one or more of: a sorting position, a play time and a play scene of the multimedia.
  • In some embodiments, the play parameter extracted by the extracting unit comprises one or more of following parameters of the multimedia: a name, a leading creative staff, a thematic multimedia list, a list of interested multimedia, a language, a style, a scene, an emotion, and a theme.
  • In some embodiments, the apparatus further comprises a feedback unit for feeding back response information of the voice play request by voice to the user.
  • In some embodiments, the generating unit is further configured for generating a to-be-played song playlist based on the play parameter and one or more of: a contemporary popularity of the multimedia, a portrait of the user, and feedback data of a preference of the user.
  • In some embodiments, the feedback unit is further configured for one or more of: feeding back received instruction information by voice, in response to generating the multimedia list; feeding back, by voice, that no relevant song is found, in response to any one of: no play parameter being extracted from the voice play request, and a to-be-played song list not being generated based on the play parameters; and feeding back, by voice, that the multimedia requested by the user has no copyright, in response to a multimedia library not having a multimedia version meeting the play parameter.
  • In some embodiments, the receiving unit comprises: a wake-up subunit for receiving a wake-up instruction inputted by the user; a feedback subunit for feeding back response information by voice; and a receiving subunit for receiving a voice play request inputted by the user.
  • In the third aspect, the embodiments of the present disclosure provide an apparatus comprising: one or more processors; and a storage device for storing one or more programs; wherein the one or more processors implement the method for playing multimedia according to any of the foregoing when the one or more programs are executed by the one or more processors.
  • In the fourth aspect, the embodiments of the present disclosure provide a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the method for playing multimedia according to any of the foregoing.
  • The embodiments of the present disclosure provide a method and an apparatus for playing multimedia. First, a voice play request inputted by a user is received; then, a scheduled play timing and a play parameter are extracted from the voice play request; next, a multimedia list is generated based on the play parameter; and the multimedia in the multimedia list is played in response to a current timing meeting the scheduled play timing. During this process, the multimedia in the multimedia list may be played at the scheduled play timing based on the play request made by the user by voice, so as to improve the accuracy and pertinence of the played multimedia.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • Additional features, objects and advantages of the embodiments of the present disclosure will become more apparent by reading the detailed description of non-limiting embodiments with reference to the following drawings:
  • FIG. 1 illustrates an exemplary system architecture diagram for implementing embodiments of a method or an apparatus for playing multimedia of the present disclosure;
  • FIG. 2 is a schematic flowchart of an embodiment of a method for playing multimedia in accordance with the present disclosure;
  • FIG. 3 is a schematic flowchart of an application scenario of a method for playing multimedia in accordance with the present disclosure;
  • FIG. 4 is an exemplary structural diagram of an embodiment of an apparatus for playing multimedia in accordance with the present disclosure; and
  • FIG. 5 is a structural diagram of a computer system suitable for implementing a terminal device or a server of the present disclosure.
  • DETAILED DESCRIPTION OF EMBODIMENTS
  • The present application will be further described below in detail in combination with the accompanying drawings and the embodiments. It should be appreciated that the specific embodiments described herein are merely used for explaining the relevant disclosure, rather than limiting the disclosure. In addition, it should be noted that, for the ease of description, only the parts related to the relevant disclosure are shown in the accompanying drawings.
  • It should also be noted that the embodiments in the present application and the features in the embodiments may be combined with each other on a non-conflict basis. The present application will be described below in detail with reference to the accompanying drawings and in combination with the embodiments.
  • FIG. 1 shows an exemplary architecture of a system 100 which may be used by a method for playing multimedia or an apparatus for playing multimedia according to the embodiments of the present application.
  • As shown in FIG. 1, the system architecture 100 may include terminal devices 101, 102 and 103, a network 104 and a server 105. The network 104 serves as a medium providing a communication link between the terminal devices 101, 102 and 103 and the server 105. The network 104 may include various types of connections, such as wired or wireless transmission links, or optical fibers.
  • The user 110 may use the terminal devices 101, 102 and 103 to interact with the server 105 through the network 104, in order to transmit or receive messages, etc. Various communication client applications, such as search engine applications, shopping applications, instant messaging tools, mailbox clients, social platform software and audio-visual playing applications, may be installed on the terminal devices 101, 102 and 103.
  • The terminal devices 101, 102 and 103 may be various electronic devices having displays, including but not limited to, smart speakers, smart phones, wearable devices, tablet computers, e-book readers, MP3 (Moving Picture Experts Group Audio Layer III) players, MP4 (Moving Picture Experts Group Audio Layer IV) players, laptop computers and desktop computers.
  • The servers 105, 106 may be servers providing various services, for example, a backend server providing support to the terminal devices 101, 102 or 103. The backend server may process the data from the terminal devices through analysis and calculation, and push the analysis and calculation results to the terminal devices.
  • It should be noted that the method for playing multimedia according to the embodiments of the present disclosure is generally executed by the servers 105, 106 or the terminal devices 101, 102, and 103. Accordingly, an apparatus for playing multimedia is generally installed on the servers 105, 106 or the terminal devices 101, 102, and 103.
  • It should be appreciated that the number of the terminal devices, networks and servers is illustrative only. Any number of terminal devices, networks and servers may be possible, depending on the practical needs.
  • With continuing reference to FIG. 2, FIG. 2 illustrates a schematic flowchart of an embodiment of a method for playing multimedia in accordance with the present disclosure.
  • As shown in FIG. 2, a method 200 for playing multimedia comprises:
  • At step 210, a voice play request input by a user is received.
  • In this embodiment, an electronic device (e.g., a server or a terminal device shown in FIG. 1) that runs the method for playing multimedia may receive a voice play request inputted by a user via a microphone of the terminal device. This voice play request is used to instruct the terminal device to play multimedia, the contents of which may be audio contents, video contents, or a combination of the two.
  • In some alternative implementations of the embodiment, receiving the voice play request inputted by the user may include: first receiving a wake-up instruction inputted by the user, then feeding back response information by voice, and then receiving the voice play request inputted by the user.
  • Taking a song (audio content) as an example of the multimedia, the terminal device may receive a voice input "Little A" from a user, where "Little A" is a predetermined wake-up instruction. Afterwards, the terminal device feeds back "hey!" by voice to the user. Then, the user inputs a voice play request of "play the next song CCC of BB," where "the next song" is the scheduled play timing, and BB and CCC are play parameters: BB is the name of the singer and CCC is the song title.
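  • As an illustration of this wake-up interaction, the following is a minimal Python sketch; the wake word "Little A," the canned reply, and the console-based listen/speak helpers are assumptions standing in for the terminal device's actual microphone, speech recognizer, and speaker.

```python
WAKE_WORD = "little a"   # assumed predetermined wake-up instruction
WAKE_REPLY = "hey!"      # assumed voice feedback after waking up


def listen() -> str:
    """Stand-in for speech recognition of the microphone input."""
    return input("user says: ").strip().lower()


def speak(text: str) -> None:
    """Stand-in for text-to-speech output on the terminal's speaker."""
    print(f"terminal says: {text}")


def receive_voice_play_request() -> str:
    """Wait for the wake-up instruction, reply by voice, then take the play request."""
    while WAKE_WORD not in listen():
        pass                 # keep listening until the wake word is heard
    speak(WAKE_REPLY)        # feed back response information by voice
    return listen()          # e.g. "play the next song ccc of bb"


if __name__ == "__main__":
    request = receive_voice_play_request()
    speak(f"received play request: {request}")
```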
  • At step 220, a scheduled play timing and a play parameter are extracted from the voice play request.
  • In this embodiment, an electronic device that runs the method for playing multimedia recognizes the voice play request as text, performs semantic analysis on the text to obtain the semantics included in the voice play request, and may then extract the scheduled play timing that hits a play-timing semantic slot and the play parameters that hit a play-parameter semantic slot. The play parameters are the parameters used to filter the multimedia, such as a multimedia name or a multimedia style.
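  • A toy version of this slot extraction is sketched below; the regular expressions and the two slots are illustrative assumptions, and a production system would rely on a full speech recognizer and semantic parser rather than pattern matching.

```python
import re

# Assumed surface patterns for the play-timing semantic slot.
TIMING_PATTERNS = [
    r"the next song",
    r"the \d+(?:st|nd|rd|th) song",
    r"(?:eight|ten|one) o'clock [a-z ]+",
    r"when (?:it rains|in a traffic jam|i am sleepy)",
]


def extract_slots(text: str):
    """Split recognized text into a scheduled play timing and play parameters."""
    text = text.lower().strip()
    timing = None
    for pattern in TIMING_PATTERNS:
        match = re.search(pattern, text)
        if match:
            timing = match.group(0)
            text = text.replace(match.group(0), " ")
            break
    # Whatever remains after dropping the command words is treated as
    # play parameters (song title, singer, style, and so on).
    remainder = re.sub(r"\b(play|of)\b", " ", text)
    parameters = [word for word in remainder.split() if word]
    return timing, parameters


if __name__ == "__main__":
    print(extract_slots("play the next song CCC of BB"))
    # -> ('the next song', ['ccc', 'bb'])
```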
  • In some alternative implementations of this embodiment, the scheduled play timing may comprise one or more of: a sorting position, a play time and a play scene of the multimedia.
  • In this implementation, the sorting position of the multimedia refers to the position of the multimedia in the current playlist, such as "the next song" or "the 20th song." The play time refers to the time at which the multimedia is to be played, such as "eight o'clock in the morning," "ten o'clock in the evening," or "one o'clock at noon each day." The play scene refers to the scenes in which the multimedia needs to be played, which may depend on, for example, the speed of a vehicle, location-based services, congestion status, mileage status, weather, current news, emotions, or crowds. In a specific example, the play scene may be "when I am found to be sleepy," "when in a traffic jam," or "when it rains," and the like.
  • The sorting position and the play time of the multimedia may directly specify the scheduled play timing. The play scene may be inputted by the user by voice, for example, the user says "Little A (the name of the terminal device), the traffic jam is annoying," or it may be determined by the terminal device based on the data collected by the device. For example, whether the user is in a drowsy state may be determined based on an image, a sound, a pulse or the like collected by the terminal device; whether there is currently a traffic jam may be determined based on the location information of the terminal device or the location-based service provided by the automobile manufacturer that integrates the terminal device; and whether it is raining may be determined based on the weather forecast published on the Internet and the current location information of the terminal device.
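  • As a sketch of how a play scene might be determined either from the user's utterance or from collected data, consider the following; the sensor fields, thresholds, and scene labels are purely hypothetical.

```python
from dataclasses import dataclass


@dataclass
class DeviceData:
    # Hypothetical signals the terminal device might collect or query.
    eye_closure_ratio: float   # from a camera image of the user
    vehicle_speed_kmh: float   # from location-based services
    raining: bool              # from an online weather forecast


def scene_from_voice(utterance: str):
    """Scene stated explicitly by the user, e.g. 'the traffic jam is annoying'."""
    utterance = utterance.lower()
    if "traffic jam" in utterance:
        return "traffic_jam"
    if "sleepy" in utterance or "tired" in utterance:
        return "drowsy"
    return None


def scene_from_sensors(data: DeviceData):
    """Scene inferred from collected data when the user does not state it."""
    if data.eye_closure_ratio > 0.6:
        return "drowsy"
    if data.vehicle_speed_kmh < 5.0:
        return "traffic_jam"
    if data.raining:
        return "rainy"
    return None


if __name__ == "__main__":
    print(scene_from_voice("Little A, the traffic jam is annoying"))  # traffic_jam
    print(scene_from_sensors(DeviceData(0.1, 2.0, False)))            # traffic_jam
```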
  • In some alternative implementations of this embodiment, the play parameter may comprise one or more of the following parameters of the multimedia: a name, a leading creative staff, a thematic multimedia list, a list of interested multimedia, a language, a style, a scene, an emotion, and a theme.
  • In this implementation, the play parameter may comprise a name of the multimedia, a leading creative staff, a thematic multimedia list, a list of interested multimedia, a language, a style, a scene, an emotion, a theme, and the like.
  • In the following, as an example, the multimedia is a song (audio content). The name of the multimedia in the play parameter may be a song title. The leading creative staff may be a singer, a songwriter or a composer. The thematic multimedia list may be an album. The list of interested multimedia may be a song-list. The language may be Chinese, Cantonese, English, Japanese, Korean, German, French, or other languages. The style may be popular, rock, folk, electronic, dance music, rap, light music, jazz, country, black music, classic, national, British, metal, punk, blues, reggae, Latin, alternative, new age, classical, post rock, new jazz and the like. The scene may be early morning, night, study, work, lunch break, afternoon tea, subway, driving, sport, travel, walking, bar, and the like. The emotion may be reminiscence, freshness, romance, sexy, sadness, healing, relaxing, loneliness, touching, excitement, happiness, quietness, missing, and the like. The theme may be original soundtrack, cartoon, campus, game, 1970s, 1980s, 1990s, network song, KTV, classic, imitation, guitar, piano, instrumental music, child, list, 2000s and so on.
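  • The parameter categories listed above can be captured in a simple record type; the following dataclass is only an illustrative sketch, and the field names are assumptions rather than terms defined by the disclosure.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple


@dataclass
class PlayParameters:
    """Optional filters extracted from a voice play request (all illustrative)."""
    name: Optional[str] = None                    # song title
    leading_creative_staff: Optional[str] = None  # singer, songwriter or composer
    thematic_list: Optional[str] = None           # e.g. an album
    interested_list: Optional[str] = None         # e.g. a user's song-list
    language: Optional[str] = None                # e.g. "English", "Cantonese"
    style: Optional[str] = None                   # e.g. "country", "jazz"
    scene: Optional[str] = None                   # e.g. "driving", "lunch break"
    emotion: Optional[str] = None                 # e.g. "relaxing", "romance"
    theme: Optional[str] = None                   # e.g. "original soundtrack", "1990s"

    def as_filters(self) -> List[Tuple[str, str]]:
        """Return only the fields the user actually specified."""
        return [(key, value) for key, value in vars(self).items() if value is not None]


if __name__ == "__main__":
    params = PlayParameters(language="English", style="country")
    print(params.as_filters())  # [('language', 'English'), ('style', 'country')]
```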
  • At step 230, a multimedia list is generated based on the play parameters.
  • In the embodiment, the multimedia matching the play parameters may be extracted from the multimedia library or the network data based on the play parameters extracted from the voice play request. For example, if the play parameters extracted from the voice play request are “English,” “Country” and “Songs,” songs matching both “English” and “Country” may be extracted from the music library to generate a song playlist.
  • In some alternative implementations of this embodiment, generating a list of multimedia based on the play parameter comprises: generating a to-be-played song playlist based on the play parameter and one or more of: a contemporary popularity of the multimedia, a user portrait, and user preference feedback data.
  • In this implementation, both the user portrait and the user preference feedback data may be obtained from big data or from the historical interaction data of the user. A personalized multimedia list that better matches the user's preference may then be generated by combining the play parameters with the user portrait and the preference feedback data provided by the user, so as to improve the relevance of the multimedia in the multimedia list.
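  • A minimal sketch of combining the play parameters with popularity, a user portrait, and preference feedback when generating the list might look like the following; the additive scoring, the weights, and the record fields are assumptions for illustration only.

```python
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class Track:
    title: str
    language: str
    style: str
    popularity: float  # assumed contemporary popularity, 0.0 to 1.0


def generate_playlist(
    tracks: List[Track],
    filters: Dict[str, str],                 # play parameters, e.g. {"style": "country"}
    user_portrait: Dict[str, float],         # e.g. {"country": 0.3} style affinities
    preference_feedback: Dict[str, float],   # e.g. {"City Lights": 0.6} per-title feedback
    limit: int = 10,
) -> List[Track]:
    # Hard-filter by the play parameters extracted from the voice play request.
    candidates = [
        track for track in tracks
        if all(getattr(track, key).lower() == value.lower() for key, value in filters.items())
    ]

    # Soft-rank by popularity, portrait affinity and explicit feedback.
    def score(track: Track) -> float:
        return (
            track.popularity
            + user_portrait.get(track.style, 0.0)
            + preference_feedback.get(track.title, 0.0)
        )

    return sorted(candidates, key=score, reverse=True)[:limit]


if __name__ == "__main__":
    library = [
        Track("Road Song", "English", "country", 0.7),
        Track("City Lights", "English", "country", 0.5),
        Track("Ballade", "French", "classical", 0.9),
    ]
    playlist = generate_playlist(
        library,
        filters={"language": "English", "style": "country"},
        user_portrait={"country": 0.3},
        preference_feedback={"City Lights": 0.6},
    )
    print([track.title for track in playlist])  # ['City Lights', 'Road Song']
```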
  • At step 240, multimedia in the multimedia list is played, in response to a current timing meeting the scheduled play timing.
  • In this embodiment, the multimedia in the multimedia list may be played via a speaker of the terminal device in response to the terminal device detecting that the current condition meets the scheduled play timing. For example, if the scheduled play timing extracted from the voice play request is "eight o'clock in the morning," the multimedia in the multimedia list may be played when the terminal device detects that the current time is eight o'clock in the morning.
  • When playing a new multimedia list, the history playlist that was in effect before it may be retained, so that the content in the history playlist can still be returned to when the user inputs a play request of "previous song."
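  • The timing check and the history-playlist behavior can be sketched as follows; the one-second polling loop and the in-memory history are simplifying assumptions rather than a prescription of how a terminal device schedules playback.

```python
import datetime
import time
from typing import List, Optional


class ScheduledPlayer:
    def __init__(self) -> None:
        self.history: List[str] = []       # previously played items ("previous song")
        self.current: Optional[str] = None

    def timing_met(self, scheduled_time: datetime.time) -> bool:
        """True once the wall-clock time reaches the scheduled play time."""
        now = datetime.datetime.now().time()
        return (now.hour, now.minute) >= (scheduled_time.hour, scheduled_time.minute)

    def play(self, playlist: List[str]) -> None:
        for title in playlist:
            if self.current is not None:
                self.history.append(self.current)   # keep the history playlist
            self.current = title
            print(f"playing: {title}")

    def previous_song(self) -> Optional[str]:
        """Return to the most recent item in the history playlist, if any."""
        if self.history:
            self.current = self.history.pop()
            print(f"returning to: {self.current}")
        return self.current


if __name__ == "__main__":
    player = ScheduledPlayer()
    eight_am = datetime.time(hour=8, minute=0)
    while not player.timing_met(eight_am):
        time.sleep(1)                               # poll until "eight o'clock in the morning"
    player.play(["Morning Song", "Second Song"])
    player.previous_song()                          # goes back to "Morning Song"
```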
  • Optionally, at step 250, the method for playing multimedia described above may further include: feeding back response information of the voice play request to the user by voice.
  • In this implementation, a voice response to the play request may be given to the user, so that the user receives the feedback of the terminal device in a timely and convenient manner. For example, after a voice play request by a user is received and a multimedia list is generated, "alright" may be fed back to the user. Conversely, if extracting the play parameters fails, "sorry, no relevant songs are found" is fed back to the user.
  • In some alternative implementations of this embodiment, the feeding back response information of the voice play request by voice to the user comprises: feeding back received instruction information by voice, in response to generating the multimedia list; feeding back finding no relevant songs by voice to the user, in response to any one of: no play parameter being extracted from the voice play request, and a to-be-played song list not being generated based on the play parameters; and feeding back that the multimedia requested by the user has no copyright by voice, in response to a multimedia library not having a multimedia version meeting the play parameter.
  • In this implementation, response information (such as “alright,” “no problem,” “OK”) of the voice play request may be fed back by voice to the user, in response to generating the multimedia list. Finding no relevant songs is fed back by voice to the user, in response to no play parameter being extracted from the voice play request, or to a to-be-played song list not being generated based on the play parameters. For example, if the play parameter in the user's voice play request is “Eight Miles Incense of XX,” no multimedia in the multimedia library would satisfy the expression; as a result, “no relevant songs are found” is fed back. The fact that the multimedia requested by the user has no copyright is fed back by voice, in response to a multimedia library not having a multimedia version meeting the play parameter. For example, “the related song has no copyright” is fed back to the user. A minimal mapping from these outcomes to spoken responses is sketched below.
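  • The sketch below only maps an outcome label to a response string; the outcome names are assumptions, and the phrases follow the examples in the text. Any text-to-speech engine could then read the returned string aloud.

```python
def voice_feedback(outcome: str) -> str:
    """Map the outcome of handling a voice play request to a spoken response."""
    responses = {
        "list_generated": "Alright.",
        "no_parameters": "Sorry, no relevant songs are found.",
        "no_playlist": "Sorry, no relevant songs are found.",
        "no_copyright": "The related song has no copyright.",
    }
    return responses.get(outcome, "Sorry, I did not understand the request.")

# Example: after the playlist is generated successfully.
print(voice_feedback("list_generated"))  # -> "Alright."
```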
  • The above-described embodiments of the present disclosure provide a method for playing multimedia. The method extracts a scheduled play timing and a play parameter from the voice play request of the user, and plays multimedia meeting the play parameter at the scheduled play timing, so that the played multimedia better meets the needs of the user, thereby enhancing the accuracy and pertinence of the multimedia played to the user.
  • An exemplary application scenario of the method for playing multimedia of the present disclosure is described below with reference to FIG. 3.
  • FIG. 3 illustrates a schematic flowchart of an application scenario of the method for playing multimedia in accordance with the present disclosure.
  • As shown in FIG. 3, the method 300 for playing multimedia operates in the smart speaker 320, and the method may include:
  • A voice play request 301 inputted by a user is first received.
  • A scheduled play timing 302 “next song” and a play parameter 303 “ABC” are then extracted from the voice play request 301: “play the next song ABC.”
  • A multimedia list 304 is then generated based on the play parameter 303 “ABC,” which may include the single song ABC, a cover version of ABC, and similar songs.
  • Finally, a multimedia 305 in the multimedia list 304 is played, in response to the current timing, namely the end of the currently played song, meeting the scheduled play timing 302 “next.”
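  • A rough sketch of the extraction step in this scenario is given below. It uses a keyword rule only to illustrate the two outputs of the step; the disclosed method would rely on speech recognition and semantic parsing rather than this hard-coded phrase match.

```python
def parse_voice_play_request(utterance: str):
    """Toy keyword-based extraction of the scheduled play timing and the play
    parameter from an utterance such as "play the next song ABC"."""
    timing = "next song" if "next song" in utterance else None
    remainder = utterance.replace("play the next song", "").strip()
    play_parameter = remainder or None
    return timing, play_parameter

timing, parameter = parse_voice_play_request("play the next song ABC")
# timing == "next song", parameter == "ABC"
```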
  • It should be understood that the method for playing multimedia shown in FIG. 3 is merely an exemplary embodiment of the method for playing multimedia, and is not intended to limit the embodiments of the present disclosure. For example, after playing the multimedia 305 in the multimedia list 304 in response to a current timing meeting the scheduled play timing 302, response information of the voice play request may be fed back by voice to the user. For another example, generating a to-be-played song playlist based on the play parameter may comprise: generating the to-be-played song playlist based on the play parameter and one or more of: time—contemporary popularity of the multimedia, a portrait of the user, and feedback data of a preference of the user.
  • The method for playing multimedia provided in the above-described application scenario of the embodiments of the present disclosure can improve the accuracy and pertinence of the played multimedia.
  • Further referring to FIG. 4, as an implementation of the above method, the present disclosure provides an embodiment of an apparatus for playing multimedia, and the embodiment of the apparatus for playing multimedia corresponds to the embodiment of the method for playing multimedia shown in FIG. 1 to FIG. 3. Therefore, the operations and features described above with respect to the method for playing multimedia in FIG. 1 to FIG. 3 are also applicable to an apparatus for playing multimedia 400 and the units contained therein, and will not be reiterated here.
  • As shown in FIG. 4, the apparatus for playing multimedia 400 includes: a receiving unit 410 for receiving a voice play request inputted by a user; an extracting unit 420 for extracting a scheduled play timing and a play parameter from the voice play request; a generating unit 430 for generating a multimedia list based on the play parameter; and a playing unit 440 for playing multimedia in the multimedia list, in response to a current timing meeting the scheduled play timing.
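  • As an illustration of how these four units cooperate, the sketch below wires placeholder callables into one pipeline. The class name and callback signatures are assumptions of this sketch; they merely stand in for the receiving, extracting, generating and playing units described above.

```python
class MultimediaPlayer:
    """Sketch wiring the four units of apparatus 400 into one request pipeline."""

    def __init__(self, receive, extract, generate, play):
        self.receive = receive    # receiving unit 410: returns the voice play request
        self.extract = extract    # extracting unit 420: returns (timing, parameters)
        self.generate = generate  # generating unit 430: returns the multimedia list
        self.play = play          # playing unit 440: plays the list at the timing

    def handle_request(self):
        request = self.receive()
        timing, parameters = self.extract(request)
        playlist = self.generate(parameters)
        self.play(playlist, timing)
```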
  • In some embodiments, the scheduled play timing extracted by the extracting unit 420 includes one or more of: a sorting position, a play time and a play scene of the multimedia.
  • In some embodiments, the play parameter extracted by the extracting unit 420 comprises one or more of following parameters of the multimedia: a name, a leading author, a thematic multimedia list, a list of interested multimedia, a language, a style, a scene, an emotion, and a theme.
  • In some embodiments, the apparatus 400 further comprises: a feedback unit 450 for feeding back response information of the voice play request by voice to the user.
  • In some embodiments, the generating unit 430 is further configured for: generating a to-be-played song playlist based on the play parameter and one or more of: time—contemporary popularity of the multimedia, a user portrait, and user preference feedback data.
  • In some embodiments, the feedback unit 450 is further configured for one or more of: feeding back received instruction information by voice, in response to generating the multimedia list; feeding back finding no relevant songs by voice to the user, in response to any one of: no play parameter being extracted from the voice play request, and a to-be-played song list not being generated based on the play parameters; and feeding back that the multimedia requested by the user has no copyright by voice, in response to a multimedia library not having a multimedia version meeting the play parameter.
  • In some embodiments, the receiving unit 410 comprises: a wake-up subunit 411 for receiving a wake-up instruction inputted by the user; a feedback subunit 412 for feeding back response information by voice; and a receiving subunit 413 for receiving a voice play request inputted by the user.
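  • The interaction of these three subunits can be sketched as the simple wake-up flow below. The wake word, method names and acknowledgement phrase are illustrative assumptions, not details given in the text.

```python
class ReceivingUnit:
    """Sketch of the receiving unit and its subunits; names are illustrative."""

    def __init__(self, wake_word: str = "hello speaker"):
        self.wake_word = wake_word  # assumed wake word

    def receive_wake_up(self, utterance: str) -> bool:
        # Wake-up subunit 411: check whether the user spoke the wake word.
        return self.wake_word in utterance.lower()

    def feed_back(self) -> str:
        # Feedback subunit 412: acknowledge by voice that the device is listening.
        return "I am here."

    def receive_request(self, utterance: str) -> str:
        # Receiving subunit 413: treat the next utterance as the voice play request.
        return utterance

unit = ReceivingUnit()
if unit.receive_wake_up("hello speaker"):
    print(unit.feed_back())
    request = unit.receive_request("play the next song ABC")
```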
  • The present disclosure also provides an apparatus comprising: one or more processors; and a storage device for storing one or more programs; wherein the one or more processors implement the method for playing multimedia according to any of the foregoing, when the one or more programs are executed by the one or more processors.
  • The present disclosure further provides an embodiment of a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the method for playing multimedia according to any of the foregoing.
  • Reference is now made to FIG. 5, which illustrates a structural diagram of a computer system 500 suitable for implementing a terminal device or a server of an embodiment of the present disclosure. The terminal device shown in FIG. 5 is merely an example, and should not impose any limitation on the function and the scope of use of the embodiments of the present disclosure.
  • As shown in FIG. 5, the computer system 500 includes a central processing unit (CPU) 501, which may execute various appropriate actions and processes in accordance with a program stored in a read-only memory (ROM) 502 or a program loaded into a random access memory (RAM) 503 from a storage portion 508. The RAM 503 also stores various programs and data required by operations of the system 500. The CPU 501, the ROM 502 and the RAM 503 are connected to each other through a bus 504. An input/output (I/O) interface 505 is also connected to the bus 504.
  • The following components are connected to the I/O interface 505: an input portion 506 including a keyboard, a mouse etc.; an output portion 507 comprising a cathode ray tube (CRT), a liquid crystal display device (LCD), a speaker etc.; a storage portion 508 including a hard disk and the like; and a communication portion 509 comprising a network interface card, such as a LAN card and a modem. The communication portion 509 performs communication processes via a network, such as the Internet. A driver 510 is also connected to the I/O interface 505 as required. A removable medium 511, such as a magnetic disk, an optical disk, a magneto-optical disk, and a semiconductor memory, may be installed on the driver 510, to facilitate the retrieval of a computer program from the removable medium 511, and the installation thereof on the storage portion 508 as needed.
  • In particular, according to embodiments of the present disclosure, the process described above with reference to the flow chart may be implemented as a computer software program. For example, an embodiment of the present disclosure includes a computer program product, which comprises a computer program that is tangibly embedded in a machine-readable medium. The computer program comprises program codes for executing the method as illustrated in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication portion 509, and/or may be installed from the removable medium 511. The computer program, when executed by the central processing unit (CPU) 501, implements the above-mentioned functionalities as defined by the methods of the present disclosure.
  • It should be noted that the computer readable medium in the present disclosure may be a computer readable signal medium, a computer readable storage medium, or any combination of the two. An example of the computer readable storage medium may include, but is not limited to: electric, magnetic, optical, electromagnetic, infrared, or semiconductor systems, apparatuses, elements, or any combination of the above. A more specific example of the computer readable storage medium may include, but is not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read only memory (ROM), an erasable programmable read only memory (EPROM or flash memory), an optical fibre, a portable compact disk read only memory (CD-ROM), an optical memory, a magnetic memory, or any suitable combination of the above. In the present disclosure, the computer readable storage medium may be any physical medium containing or storing programs which can be used by, or incorporated into, a command execution system, apparatus or element. In the present disclosure, the computer readable signal medium may include a data signal in the base band or propagating as part of a carrier, in which computer readable program codes are carried. The propagating signal may take various forms, including but not limited to: an electromagnetic signal, an optical signal, or any suitable combination of the above. The computer readable signal medium may be any computer readable medium other than the computer readable storage medium, and is capable of transmitting, propagating or transferring programs for use by, or in combination with, a command execution system, apparatus or element. The program codes contained on the computer readable medium may be transmitted with any suitable medium, including but not limited to: wireless, wired, optical cable, RF medium, or any suitable combination of the above.
  • The flow charts and block diagrams in the accompanying drawings illustrate architectures, functions and operations that may be implemented according to the systems, methods and computer program products of the various embodiments of the present disclosure. In this regard, each of the blocks in the flow charts or block diagrams may represent a module, a program segment, or a code portion, said module, program segment, or code portion comprising one or more executable instructions for implementing specified logic functions. It should also be noted that, in some alternative implementations, the functions denoted by the blocks may occur in a sequence different from the sequences shown in the figures. For example, any two blocks presented in succession may, in fact, be executed substantially in parallel, or sometimes in the reverse sequence, depending on the function involved. It should also be noted that each block in the block diagrams and/or flow charts, as well as a combination of blocks, may be implemented using a dedicated hardware-based system executing specified functions or operations, or by a combination of dedicated hardware and computer instructions.
  • The units or modules involved in the embodiments of the present disclosure may be implemented by means of software or hardware. The described units or modules may also be provided in a processor, for example, described as: a processor comprising a receiving unit, an extracting unit, a generating unit and a playing unit, where the names of these units or modules do not in some cases constitute a limitation to such units or modules themselves. For example, the receiving unit may also be described as “a unit for receiving a voice play request inputted by a user.”
  • In another aspect, the present disclosure further provides a non-volatile computer-readable storage medium. The non-volatile computer-readable storage medium may be the non-volatile computer storage medium included in the apparatus in the above described embodiments, or a stand-alone non-volatile computer-readable storage medium not assembled into the apparatus. The non-volatile computer-readable storage medium stores one or more programs. The one or more programs, when executed by a device, cause the device to: receive a voice play request inputted by a user; extract a scheduled play timing and a play parameter from the voice play request; generate a multimedia list based on the play parameter; and play multimedia in the multimedia list, in response to a current timing meeting the scheduled play timing.
  • The above description only provides an explanation of the preferred embodiments of the present disclosure and the technical principles used. It should be appreciated by those skilled in the art that the inventive scope of the present disclosure is not limited to the technical solutions formed by the particular combinations of the above-described technical features. The inventive scope should also cover other technical solutions formed by any combination of the above-described technical features or equivalent features thereof without departing from the concept of the disclosure, for example, technical solutions formed by interchanging the above-described features with (but not limited to) technical features with similar functions disclosed in the present disclosure.

Claims (15)

What is claimed is:
1. A method for playing multimedia, the method comprising:
receiving a voice play request inputted by a user;
extracting a scheduled play timing and a play parameter from the voice play request;
generating a multimedia list based on the play parameter; and
playing multimedia in the multimedia list, in response to a current timing meeting the scheduled play timing.
2. The method according to claim 1, wherein the scheduled play timing comprises one or more of: a sorting position, a play time and a play scene of the multimedia.
3. The method according to claim 1, wherein the play parameter comprises one or more of following parameters of the multimedia: a name, a leading author, a thematic multimedia list, a list of interested multimedia, a language, a style, a scene, an emotion, and a theme.
4. The method according to claim 1, the method further comprising:
feeding back response information of the voice play request by voice to the user.
5. The method according to claim 1, wherein generating a multimedia playlist based on the play parameter comprises:
generating the multimedia playlist based on the play parameter and one or more of: time—contemporary popularity of the multimedia, a portrait of the user, and feedback data of a preference of the user.
6. The method according to claim 4, wherein the feeding back response information of the voice play request by voice to the user comprises one or more of:
feeding back received instruction information by voice, in response to generating the multimedia list;
feeding back finding no relevant songs by voice to the user, in response to any one of: no play parameter being extracted from the voice play request; and a to-be-played song list not being generated based on the play parameters; and
feeding back the multimedia requested by the user having no copyright by voice, in response to a multimedia library not having a multimedia version meeting the play parameter.
7. The method according to claim 1, wherein the receiving a voice play request inputted by a user comprises:
receiving a wake-up instruction inputted by the user; and
feeding back response information by voice and receiving the voice play request inputted by the user.
8. An apparatus for playing multimedia, the apparatus comprising:
at least one processor; and
a memory storing instructions, the instructions when executed by the at least one processor, cause the at least one processor to perform operations, the operations comprising:
receiving a voice play request inputted by a user;
extracting a scheduled play timing and a play parameter from the voice play request;
generating a multimedia list based on the play parameter; and
playing multimedia in the multimedia list, in response to a current timing meeting the scheduled play timing.
9. The apparatus according to claim 8, wherein the scheduled play timing comprises one or more of: a sorting position, a play time and a play scene of the multimedia.
10. The apparatus according to claim 8, wherein the play parameter comprises one or more of following parameters of the multimedia: a name, a leading author, a thematic multimedia list, a list of interested multimedia, a language, a style, a scene, an emotion, and a theme.
11. The apparatus according to claim 8, wherein the operations further comprise:
feeding back response information of the voice play request by voice to the user.
12. The apparatus according to claim 8, wherein the generating the multimedia list based on the play parameter comprises:
generating a multimedia playlist based on the play parameter and one or more of: time—contemporary popularity of the multimedia, a portrait of the user, and feedback data of a preference of the user.
13. The apparatus according to claim 12, wherein the feeding back response information of the voice play request by voice to the user comprises one or more of:
feeding back received instruction information by voice, in response to generating the multimedia list;
feeding back finding no relevant songs by voice to the user, in response to any one of: no play parameter being extracted from the voice play request; and a to-be-played song list not being generated based on the play parameters; and
feeding back the multimedia requested by the user having no copyright by voice, in response to a multimedia library not having a multimedia version meeting the play parameter.
14. The apparatus according to claim 8, wherein the receiving a voice play request inputted by a user comprises:
receiving a wake-up instruction inputted by the user;
feeding back response information by voice; and
receiving a voice play request inputted by the user.
15. A non-transitory computer storage medium storing a computer program, the computer program, when executed by one or more processors, causes the one or more processors to perform operations, the operations comprising:
receiving a voice play request inputted by a user;
extracting a scheduled play timing and a play parameter from the voice play request;
generating a multimedia list based on the play parameter; and
playing multimedia in the multimedia list, in response to a current timing meeting the scheduled play timing.
US15/858,538 2017-11-14 2017-12-29 Method and apparatus for playing multimedia Abandoned US20190147863A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201711119577.4 2017-11-14
CN201711119577.4A CN107895016B (en) 2017-11-14 2017-11-14 Method and device for playing multimedia

Publications (1)

Publication Number Publication Date
US20190147863A1 true US20190147863A1 (en) 2019-05-16

Family

ID=61804343

Family Applications (1)

Application Number Title Priority Date Filing Date
US15/858,538 Abandoned US20190147863A1 (en) 2017-11-14 2017-12-29 Method and apparatus for playing multimedia

Country Status (3)

Country Link
US (1) US20190147863A1 (en)
JP (1) JP2019091014A (en)
CN (1) CN107895016B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11164583B2 (en) 2019-06-27 2021-11-02 Baidu Online Network Technology (Beijing) Co., Ltd. Voice processing method and apparatus
CN114341830A (en) * 2019-08-28 2022-04-12 索尼互动娱乐股份有限公司 Context-based action suggestions
US11403343B2 (en) 2019-07-26 2022-08-02 Toyota Jidosha Kabushiki Kaisha Retrieval of video and vehicle behavior for a driving scene described in search text
CN114863926A (en) * 2022-03-28 2022-08-05 广州小鹏汽车科技有限公司 Vehicle control method, vehicle, server, and storage medium

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108737871B (en) * 2018-06-01 2020-12-25 深圳安麦思科技有限公司 Projection control method and system
CN108920657A (en) * 2018-07-03 2018-11-30 百度在线网络技术(北京)有限公司 Method and apparatus for generating information
CN109344571A (en) * 2018-10-08 2019-02-15 珠海格力电器股份有限公司 Music processing method, music acquisition method and device and household appliance
CN110349599B (en) * 2019-06-27 2021-06-08 北京小米移动软件有限公司 Audio playing method and device
CN113360127B (en) * 2021-05-31 2023-05-23 富途网络科技(深圳)有限公司 Audio playing method and electronic equipment

Citations (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040064306A1 (en) * 2002-09-30 2004-04-01 Wolf Peter P. Voice activated music playback system
US6718308B1 (en) * 2000-02-22 2004-04-06 Daniel L. Nolting Media presentation system controlled by voice to text commands
US20090326949A1 (en) * 2006-04-04 2009-12-31 Johnson Controls Technology Company System and method for extraction of meta data from a digital media storage device for media selection in a vehicle
US20110046755A1 (en) * 2009-08-24 2011-02-24 Samsung Electronics Co., Ltd. Contents reproducing device and method
US20120265535A1 (en) * 2009-09-07 2012-10-18 Donald Ray Bryant-Rich Personal voice operated reminder system
US20140278419A1 (en) * 2013-03-14 2014-09-18 Microsoft Corporation Voice command definitions used in launching application with a command
US9405741B1 (en) * 2014-03-24 2016-08-02 Amazon Technologies, Inc. Controlling offensive content in output
US20160357864A1 (en) * 2015-06-05 2016-12-08 Apple Inc. Personalized music presentation templates
US20170103754A1 (en) * 2015-10-09 2017-04-13 Xappmedia, Inc. Event-based speech interactive media player
US20170169819A1 (en) * 2015-12-09 2017-06-15 Lenovo (Singapore) Pte. Ltd. Modifying input based on determined characteristics
US9734839B1 (en) * 2012-06-20 2017-08-15 Amazon Technologies, Inc. Routing natural language commands to the appropriate applications
US20170243576A1 (en) * 2016-02-22 2017-08-24 Sonos, Inc. Voice Control of a Media Playback System
US20170316782A1 (en) * 2010-02-25 2017-11-02 Apple Inc. User profiling for voice input processing
US20180190279A1 (en) * 2017-01-03 2018-07-05 Logitech Europe S.A Content streaming system
US10127908B1 (en) * 2016-11-11 2018-11-13 Amazon Technologies, Inc. Connected accessory for a voice-controlled device
US20190103103A1 (en) * 2017-10-03 2019-04-04 Google Llc Voice user interface shortcuts for an assistant application
US10318236B1 (en) * 2016-05-05 2019-06-11 Amazon Technologies, Inc. Refining media playback
US10380208B1 (en) * 2015-12-28 2019-08-13 Amazon Technologies, Inc. Methods and systems for providing context-based recommendations

Family Cites Families (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6643620B1 (en) * 1999-03-15 2003-11-04 Matsushita Electric Industrial Co., Ltd. Voice activated controller for recording and retrieving audio/video programs
JP2001354071A (en) * 2000-06-13 2001-12-25 Mazda Motor Corp Audio equipment for moving body
JP2004163590A (en) * 2002-11-12 2004-06-10 Denso Corp Reproducing device and program
JP4122947B2 (en) * 2002-11-28 2008-07-23 ヤマハ株式会社 Music information distribution device
JP2005300772A (en) * 2004-04-08 2005-10-27 Denso Corp Musical piece information introduction system
US20100049540A1 (en) * 2006-12-08 2010-02-25 Pioneer Corporation Content delivery apparatus, content reproducing apparatus, content delivery method, content reproducing method, content delivery program, content reproducing program, and recording medium
JP4924282B2 (en) * 2007-08-21 2012-04-25 日本電気株式会社 Mobile terminal and alarm sound selection method for the terminal
US8971546B2 (en) * 2011-10-14 2015-03-03 Sonos, Inc. Systems, methods, apparatus, and articles of manufacture to control audio playback devices
CN103187078A (en) * 2011-12-28 2013-07-03 上海博泰悦臻电子设备制造有限公司 Voice music control device
KR20130140423A (en) * 2012-06-14 2013-12-24 삼성전자주식회사 Display apparatus, interactive server and method for providing response information
CN102724309A (en) * 2012-06-14 2012-10-10 广东好帮手电子科技股份有限公司 Vehicular voice network music system and control method thereof
CN102831892B (en) * 2012-09-07 2014-10-22 深圳市信利康电子有限公司 Toy control method and system based on internet voice interaction
US9755605B1 (en) * 2013-09-19 2017-09-05 Amazon Technologies, Inc. Volume control
CN103686290A (en) * 2013-12-27 2014-03-26 乐视致新电子科技(天津)有限公司 Method and device for mobile communication terminal to control smart TV to play video with delay
JP6559417B2 (en) * 2014-12-03 2019-08-14 シャープ株式会社 Information processing apparatus, information processing method, dialogue system, and control program
CN104778959B (en) * 2015-03-23 2017-10-31 广东欧珀移动通信有限公司 Playing device control method and terminal
US9826306B2 (en) * 2016-02-22 2017-11-21 Sonos, Inc. Default playback device designation
CN106251866A (en) * 2016-08-05 2016-12-21 易晓阳 A kind of Voice command music network playing device

Patent Citations (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6718308B1 (en) * 2000-02-22 2004-04-06 Daniel L. Nolting Media presentation system controlled by voice to text commands
US20040064306A1 (en) * 2002-09-30 2004-04-01 Wolf Peter P. Voice activated music playback system
US20090326949A1 (en) * 2006-04-04 2009-12-31 Johnson Controls Technology Company System and method for extraction of meta data from a digital media storage device for media selection in a vehicle
US20110046755A1 (en) * 2009-08-24 2011-02-24 Samsung Electronics Co., Ltd. Contents reproducing device and method
US20120265535A1 (en) * 2009-09-07 2012-10-18 Donald Ray Bryant-Rich Personal voice operated reminder system
US20170316782A1 (en) * 2010-02-25 2017-11-02 Apple Inc. User profiling for voice input processing
US9734839B1 (en) * 2012-06-20 2017-08-15 Amazon Technologies, Inc. Routing natural language commands to the appropriate applications
US20140278419A1 (en) * 2013-03-14 2014-09-18 Microsoft Corporation Voice command definitions used in launching application with a command
US9405741B1 (en) * 2014-03-24 2016-08-02 Amazon Technologies, Inc. Controlling offensive content in output
US20160357864A1 (en) * 2015-06-05 2016-12-08 Apple Inc. Personalized music presentation templates
US20170103754A1 (en) * 2015-10-09 2017-04-13 Xappmedia, Inc. Event-based speech interactive media player
US20170169819A1 (en) * 2015-12-09 2017-06-15 Lenovo (Singapore) Pte. Ltd. Modifying input based on determined characteristics
US10380208B1 (en) * 2015-12-28 2019-08-13 Amazon Technologies, Inc. Methods and systems for providing context-based recommendations
US20170243576A1 (en) * 2016-02-22 2017-08-24 Sonos, Inc. Voice Control of a Media Playback System
US9947316B2 (en) * 2016-02-22 2018-04-17 Sonos, Inc. Voice control of a media playback system
US10318236B1 (en) * 2016-05-05 2019-06-11 Amazon Technologies, Inc. Refining media playback
US10127908B1 (en) * 2016-11-11 2018-11-13 Amazon Technologies, Inc. Connected accessory for a voice-controlled device
US20180190279A1 (en) * 2017-01-03 2018-07-05 Logitech Europe S.A Content streaming system
US20190103103A1 (en) * 2017-10-03 2019-04-04 Google Llc Voice user interface shortcuts for an assistant application

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11164583B2 (en) 2019-06-27 2021-11-02 Baidu Online Network Technology (Beijing) Co., Ltd. Voice processing method and apparatus
US11403343B2 (en) 2019-07-26 2022-08-02 Toyota Jidosha Kabushiki Kaisha Retrieval of video and vehicle behavior for a driving scene described in search text
CN114341830A (en) * 2019-08-28 2022-04-12 索尼互动娱乐股份有限公司 Context-based action suggestions
EP4022455A4 (en) * 2019-08-28 2023-08-23 Sony Interactive Entertainment Inc. Context-based action suggestions
CN114863926A (en) * 2022-03-28 2022-08-05 广州小鹏汽车科技有限公司 Vehicle control method, vehicle, server, and storage medium

Also Published As

Publication number Publication date
JP2019091014A (en) 2019-06-13
CN107895016A (en) 2018-04-10
CN107895016B (en) 2022-02-15

Similar Documents

Publication Publication Date Title
CN107871500B (en) Method and device for playing multimedia
US20190147863A1 (en) Method and apparatus for playing multimedia
CN107918653B (en) Intelligent playing method and device based on preference feedback
US10643610B2 (en) Voice interaction based method and apparatus for generating multimedia playlist
US8521766B1 (en) Systems and methods for providing information discovery and retrieval
CN109036417B (en) Method and apparatus for processing voice request
WO2022198811A1 (en) Method and apparatus for music sharing, electronic device, and storage medium
CN108604233B (en) Media consumption context for personalized instant query suggestions
JP7171911B2 (en) Generate interactive audio tracks from visual content
US20150089368A1 (en) Searching within audio content
CN107844587B (en) Method and apparatus for updating multimedia playlist
US20160255025A1 (en) Systems, methods and computer readable media for communicating in a network using a multimedia file
US11657807B2 (en) Multi-tier speech processing and content operations
CN111274819A (en) Resource acquisition method and device
KR102506361B1 (en) Adjustment of overlapping processing of audio queries
CN111159535A (en) Resource acquisition method and device
CN117459776A (en) Multimedia content playing method and device, electronic equipment and storage medium
US11475887B2 (en) Systems and methods for aligning lyrics using a neural network
CN108062353A (en) Play the method and electronic equipment of multimedia file
US20230237991A1 (en) Server-based false wake word detection
US20240232256A9 (en) System and method for recognizing music based on auditory memory and generating playlists
WO2023158384A2 (en) Information processing method and apparatus, and device, storage medium and program
EP3792915A1 (en) Systems and methods for aligning lyrics using a neural network
CN117312654A (en) Recall method, recall device, recall terminal and storage medium
CN119960634A (en) Interaction method, electronic device, computer-readable storage medium, and program product

Legal Events

Date Code Title Description
AS Assignment

Owner name: BAIDU ONLINE NETWORK TECHNOLOGY (BEIJING) CO., LTD

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:LU, GUANG;YE, SHIQUAN;LUO, XIAJUN;AND OTHERS;REEL/FRAME:044996/0769

Effective date: 20171225

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: FINAL REJECTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: ADVISORY ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: FINAL REJECTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: ADVISORY ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: FINAL REJECTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE AFTER FINAL ACTION FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: ADVISORY ACTION MAILED

AS Assignment

Owner name: BAIDU ONLINE NETWORK TECHNOLOGY (BEIJING) CO., LTD., CHINA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:BAIDU ONLINE NETWORK TECHNOLOGY (BEIJING) CO., LTD.;REEL/FRAME:056811/0772

Effective date: 20210527

Owner name: SHANGHAI XIAODU TECHNOLOGY CO. LTD., CHINA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:BAIDU ONLINE NETWORK TECHNOLOGY (BEIJING) CO., LTD.;REEL/FRAME:056811/0772

Effective date: 20210527

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION