US20190348044A1 - Refrigerator and information display method thereof - Google Patents
- Publication number
- US20190348044A1 (application US 16/476,008; US201716476008A)
- Authority
- US
- United States
- Prior art keywords
- speech
- display
- food
- processor
- user
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
-
- F—MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
- F25—REFRIGERATION OR COOLING; COMBINED HEATING AND REFRIGERATION SYSTEMS; HEAT PUMP SYSTEMS; MANUFACTURE OR STORAGE OF ICE; LIQUEFACTION SOLIDIFICATION OF GASES
- F25D—REFRIGERATORS; COLD ROOMS; ICE-BOXES; COOLING OR FREEZING APPARATUS NOT OTHERWISE PROVIDED FOR
- F25D29/00—Arrangement or mounting of control or safety devices
-
- F—MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
- F25—REFRIGERATION OR COOLING; COMBINED HEATING AND REFRIGERATION SYSTEMS; HEAT PUMP SYSTEMS; MANUFACTURE OR STORAGE OF ICE; LIQUEFACTION SOLIDIFICATION OF GASES
- F25D—REFRIGERATORS; COLD ROOMS; ICE-BOXES; COOLING OR FREEZING APPARATUS NOT OTHERWISE PROVIDED FOR
- F25D29/00—Arrangement or mounting of control or safety devices
- F25D29/005—Mounting of control devices
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/30—Authentication, i.e. establishing the identity or authorisation of security principals
- G06F21/31—User authentication
- G06F21/32—User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/16—Sound input; Sound output
- G06F3/167—Audio in a user interface, e.g. using voice commands for navigating, audio feedback
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/004—Artificial life, i.e. computing arrangements simulating life
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q30/00—Commerce
- G06Q30/02—Marketing; Price estimation or determination; Fundraising
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q30/00—Commerce
- G06Q30/06—Buying, selling or leasing transactions
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q30/00—Commerce
- G06Q30/06—Buying, selling or leasing transactions
- G06Q30/0601—Electronic shopping [e-shopping]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q50/00—Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
- G06Q50/10—Services
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/26—Speech to text systems
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/28—Constructional details of speech recognition systems
- G10L15/285—Memory allocation or algorithm optimisation to reduce hardware requirements
-
- F—MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
- F25—REFRIGERATION OR COOLING; COMBINED HEATING AND REFRIGERATION SYSTEMS; HEAT PUMP SYSTEMS; MANUFACTURE OR STORAGE OF ICE; LIQUEFACTION SOLIDIFICATION OF GASES
- F25D—REFRIGERATORS; COLD ROOMS; ICE-BOXES; COOLING OR FREEZING APPARATUS NOT OTHERWISE PROVIDED FOR
- F25D2400/00—General features of, or devices for refrigerators, cold rooms, ice-boxes, or for cooling or freezing apparatus not covered by any other subclass
- F25D2400/36—Visual displays
- F25D2400/361—Interactive visual displays
-
- F—MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
- F25—REFRIGERATION OR COOLING; COMBINED HEATING AND REFRIGERATION SYSTEMS; HEAT PUMP SYSTEMS; MANUFACTURE OR STORAGE OF ICE; LIQUEFACTION SOLIDIFICATION OF GASES
- F25D—REFRIGERATORS; COLD ROOMS; ICE-BOXES; COOLING OR FREEZING APPARATUS NOT OTHERWISE PROVIDED FOR
- F25D2500/00—Problems to be solved
- F25D2500/06—Stock management
-
- F—MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
- F25—REFRIGERATION OR COOLING; COMBINED HEATING AND REFRIGERATION SYSTEMS; HEAT PUMP SYSTEMS; MANUFACTURE OR STORAGE OF ICE; LIQUEFACTION SOLIDIFICATION OF GASES
- F25D—REFRIGERATORS; COLD ROOMS; ICE-BOXES; COOLING OR FREEZING APPARATUS NOT OTHERWISE PROVIDED FOR
- F25D2600/00—Control issues
- F25D2600/06—Controlling according to a predetermined profile
-
- F—MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
- F25—REFRIGERATION OR COOLING; COMBINED HEATING AND REFRIGERATION SYSTEMS; HEAT PUMP SYSTEMS; MANUFACTURE OR STORAGE OF ICE; LIQUEFACTION SOLIDIFICATION OF GASES
- F25D—REFRIGERATORS; COLD ROOMS; ICE-BOXES; COOLING OR FREEZING APPARATUS NOT OTHERWISE PROVIDED FOR
- F25D2700/00—Means for sensing or measuring; Sensors therefor
- F25D2700/04—Sensors detecting the presence of a person
-
- F—MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
- F25—REFRIGERATION OR COOLING; COMBINED HEATING AND REFRIGERATION SYSTEMS; HEAT PUMP SYSTEMS; MANUFACTURE OR STORAGE OF ICE; LIQUEFACTION SOLIDIFICATION OF GASES
- F25D—REFRIGERATORS; COLD ROOMS; ICE-BOXES; COOLING OR FREEZING APPARATUS NOT OTHERWISE PROVIDED FOR
- F25D2700/00—Means for sensing or measuring; Sensors therefor
- F25D2700/12—Sensors measuring the inside temperature
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
- G10L2015/223—Execution procedure of a spoken command
Definitions
- the present disclosure relates to a refrigerator and an information display method thereof, more particularly, to a refrigerator capable of communicating with an external device and an information display method thereof.
- AI (artificial intelligence)
- refrigerators have been equipped with a display to show the temperature of the storage compartment and the operating mode of the refrigerator.
- Such a display not only enables a user to easily acquire image information using a graphic user interface, but also enables a user to intuitively input control commands using a touch panel.
- the new display is capable of receiving information as well as displaying it.
- the new refrigerator includes a communication module for connecting to an external device (for example, a server connected to the Internet).
- Refrigerators may be connected to the Internet through a communication module, acquire a variety of information from different servers, and provide a variety of services based on the acquired information. For example, through the Internet, refrigerators may provide services such as Internet shopping as well as food-related information such as food details and recipes.
- the refrigerator provides a variety of services to users through its display and communication modules.
- AI (artificial intelligence)
- an AI system is a computer system that implements human-level intelligence, in which a machine learns, determines, and becomes more intelligent by itself, unlike rule-based smart systems.
- as an AI system is used, its recognition rate increases and it understands user preferences more accurately. Therefore, rule-based smart systems are gradually being replaced by deep-learning-based AI systems.
- AI technology includes machine learning (e.g., deep learning) and element technologies that utilize machine learning.
- machine learning is an algorithm technology that autonomously categorizes and learns the characteristics of input data.
- Element technologies are technologies that utilize the machine learning and may include linguistic understanding, visual understanding, reasoning/prediction, knowledge representation, and motion control.
- Linguistic understanding is a technology for recognizing, applying/processing human languages/characters and includes natural language processing, machine translation, dialogue system, query response, speech recognition/synthesis, etc.
- Visual understanding is a technology for recognizing and processing objects in a manner similar to that of human vision and includes object recognition, object tracking, image searching, human recognition, scene understanding, space understanding, and image enhancement.
- Reasoning/prediction is a technology to determine information for logical reasoning and prediction and includes knowledge/probability-based reasoning, optimization prediction, preference-based planning, and recommendation.
- Knowledge representation is a technology for automating human experience information into knowledge data and includes knowledge building (data generation/categorization) and knowledge management (data utilization).
- Motion control is a technology for controlling autonomous driving of a vehicle and a motion of a robot and includes motion control (navigation, collision avoidance, driving), manipulation control (behavior control), etc.
- the present disclosure is directed to providing a refrigerator capable of receiving a spoken command by using speech recognition technology and capable of outputting content as speech.
- the present disclosure is directed to providing a refrigerator capable of providing Internet shopping based on a spoken command.
- the present disclosure is directed to providing a refrigerator capable of identifying a plurality of users and providing content appropriate for the identified user.
- a refrigerator including a microphone, a display configured to display information according to a speech input via the microphone, and a controller configured to, when a first speech including a food name is recognized, display, on the display, a list including information on a food having a name corresponding to the food name and an identification mark identifying the food, and configured to, when a second speech referring to at least one of the identification marks is recognized, display, on the display, the food information indicated by the mark contained in the second speech.
- the controller may overlap the list, in a card-type user interface, with the user interface of an application displayed on the display.
- the controller may execute an application providing information on the food indicated by the mark contained in the second speech, so as to display the food information provided from the application on the display.
- the refrigerator may further include a communication circuitry, and when the first speech is recognized, the controller may transmit data of the first speech to a server via the communication circuitry, and when the communication circuitry receives information on the list transmitted from the server, the controller may display the list on the display.
- the refrigerator may further include a communication circuitry, and when the second speech is recognized, the controller may transmit data of the second speech to a server via the communication circuitry, and when the communication circuitry receives food information transmitted from the server, the controller may display the food information on the display.
- the refrigerator may further include a communication circuitry, and when the first speech is recognized, the controller may transmit data of the first speech to a server via the communication circuitry, and when the communication circuitry receives analysis information on the first speech transmitted from the server, the controller may transmit a command according to the analysis information to an application, which is related to performance of an operation according to the analysis information, so as to allow the application to display the list.
- the refrigerator may further include a communication circuitry, and when the second speech is recognized, the controller may transmit data of the second speech to a server via the communication circuitry, and when the communication circuitry receives analysis information on the second speech transmitted from the server, the controller may transmit a command according to the analysis information to an application, which is related to performance of an operation according to the analysis information, so as to allow the application to display the food information.
- the refrigerator may further include a speaker configured to, when the list is displayed on the display due to the recognition of the first speech, output a speech indicating that the list is displayed, and configured to, when the food information is displayed on the display due to the recognition of the second speech, output a speech indicating that the food information is displayed.
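The two-step interaction described above (a first utterance containing a food name producing a marked food list, and a second utterance selecting an entry by its mark) can be sketched as follows. The catalogue contents and all function names are illustrative assumptions, not the patented implementation.

```python
# Sketch of the two-utterance flow: a first speech containing a food
# name yields a list whose entries carry identification marks
# (1, 2, 3, ...); a second speech referring to a mark yields the
# detailed information for that single entry. All data are invented.

# Hypothetical shopping catalogue: food name -> list of offers.
CATALOGUE = {
    "milk": [
        {"name": "Whole milk 1L", "price": 2.49},
        {"name": "Skim milk 1L", "price": 2.29},
    ],
}

def handle_first_speech(food_name, catalogue=CATALOGUE):
    """Return a food list whose entries carry identification marks."""
    offers = catalogue.get(food_name.lower(), [])
    # Attach a mark (a small number) so that a later utterance such as
    # "show me number 2" can refer to one entry unambiguously.
    return [{"mark": i + 1, **offer} for i, offer in enumerate(offers)]

def handle_second_speech(mark, food_list):
    """Return the food information indicated by the spoken mark."""
    for entry in food_list:
        if entry["mark"] == mark:
            return entry
    return None  # the mark is not on the displayed list

food_list = handle_first_speech("milk")
detail = handle_second_speech(2, food_list)  # e.g. "show me number 2"
```

When `handle_second_speech` returns `None`, the refrigerator would presumably re-prompt the user, since the spoken mark does not match any displayed entry.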
- a refrigerator including a microphone, a display configured to display information according to a speech input via the microphone, a speaker configured to output a speech, and a controller configured to, when a speech is input via the microphone, execute a command indicated by the input speech on an application displayed on the display, and configured to, when the controller is not able to execute the command on that application, execute the command on another application according to a priority that is pre-determined among applications related to the command.
- when the controller is not able to execute the command indicated by the input speech on the application, the controller may output, via the speaker, a speech indicating that the command cannot be executed on the application.
- the controller may output, via the speaker, a speech requesting confirmation of whether to execute the command on another application.
- the controller may execute a command indicated by the speech on an application indicated by the input answer.
- the controller may select an application in which the command is to be executed, according to the priority that is pre-determined among applications related to the command, and execute the command on the selected application.
- the controller may execute the command indicated by the input speech, on the application displayed on the display.
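The fallback rule described above can be sketched as a small dispatcher: try the application currently shown on the display first, then fall back through a pre-determined priority list. The application names, capability table, and priority table are invented for illustration.

```python
# Sketch of priority-based command dispatch: the foreground application
# is tried first; if it cannot execute the command, related applications
# are tried in a pre-determined priority order. All names are assumed.

# Hypothetical mapping: application -> commands it can execute.
CAPABILITIES = {
    "recipe_app": {"show_recipe"},
    "shopping_app": {"order_food", "show_price"},
    "memo_app": {"add_memo"},
}

# Hypothetical pre-determined priority per command.
PRIORITY = {
    "order_food": ["shopping_app"],
    "show_recipe": ["recipe_app"],
}

def dispatch(command, foreground_app):
    """Return the application on which the command should be executed."""
    if command in CAPABILITIES.get(foreground_app, set()):
        return foreground_app                 # foreground app can handle it
    for app in PRIORITY.get(command, []):     # otherwise fall back by priority
        if command in CAPABILITIES.get(app, set()):
            return app
    return None  # no priority exists: ask the user for confirmation
```

A `None` result corresponds to the case above where no priority is present, so the refrigerator would output a speech requesting confirmation instead of executing the command.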
- a refrigerator including a microphone, a display configured to display information according to a speech input via the microphone, a speaker configured to output a speech, and a controller configured to, when a speech is input via the microphone and it is necessary to identify a user in order to execute a command indicated by the speech, display pre-registered users on the display and output, via the speaker, a speech requesting selection of a user among the displayed users.
- the controller may execute the command indicated by the speech, according to the selected user, and display a result thereof on the display.
- the controller may display the pre-registered users and a mark identifying each user on the display.
- the controller may execute a command indicated by the speech, according to a user indicated by the mark, and display a result thereof on the display.
- the controller may execute a command indicated by the speech, according to a user contained in the speech, and display a result thereof on the display.
- the controller may execute a command indicated by the speech, according to a user indicated by speech data matching with the input speech among pre-stored speech data, and display a result thereof on the display.
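The user-identification cases above (an expression in the speech naming a user, or a match against pre-stored speech data) can be sketched as follows. The feature vectors are a crude stand-in for real speaker-identification data; all names and values are assumptions.

```python
# Sketch of identifying the user for a command: if the utterance names
# a registered user, that user is selected; otherwise the input speech
# is matched against pre-stored speech data (toy "voiceprint" vectors
# here). In the remaining case the pre-registered users would be shown
# on the display so one can be chosen by its mark. All data invented.

REGISTERED_USERS = ["alice", "bob"]

# Hypothetical pre-stored voice features per user.
VOICEPRINTS = {"alice": [0.9, 0.1], "bob": [0.2, 0.8]}

def closest_user(features):
    """Return the user whose stored features are nearest (L1 distance)."""
    def dist(a, b):
        return sum(abs(x - y) for x, y in zip(a, b))
    return min(VOICEPRINTS, key=lambda u: dist(VOICEPRINTS[u], features))

def identify_user(utterance_text, utterance_features):
    # 1. An expression indicating a user contained in the speech wins.
    for user in REGISTERED_USERS:
        if user in utterance_text.lower():
            return user, "named"
    # 2. Otherwise match the input speech against pre-stored speech data.
    return closest_user(utterance_features), "voice-match"

user, how = identify_user("order it for Bob", [0.5, 0.5])
```

A production system would replace the L1 distance with a trained speaker-identification model, but the selection logic (explicit name first, then voice match) follows the order described above.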
- An aspect of the present disclosure may provide a refrigerator that receives a spoken command by using speech recognition technology and is capable of outputting content as speech.
- An aspect of the present disclosure may provide a refrigerator that provides Internet shopping based on a spoken command.
- An aspect of the present disclosure may provide a refrigerator that identifies a plurality of users and provides content appropriate for the identified user.
- a refrigerator including a storage compartment configured to store food, a temperature detector configured to detect an internal temperature of the storage compartment, a cooler configured to supply cool air to the storage compartment, a microphone configured to receive a speech, a display configured to display information, at least one processor configured to be electrically connected to the temperature detector, the microphone, and the display; and a memory configured to be electrically connected to the at least one processor.
- the memory may store at least one instruction configured to, when a first speech including a food name is recognized via the microphone, allow the processor to display, on the display, a food list that includes food information corresponding to the food name and an identification mark identifying the food information, and configured to, when a second speech referring to the identification mark is recognized via the microphone, allow the processor to display, on the display, at least one piece of purchase information on the food corresponding to the identification mark.
- the memory may store at least one instruction configured to, when the first speech including the food name is input via the microphone, allow the processor to acquire the food name by recognizing the first speech using a learning network model trained using an artificial intelligence algorithm, and configured to allow the processor to display the food list including the food information corresponding to the food name on the display.
- the learning network model may be trained by using a plurality of speeches and a plurality of words corresponding to the plurality of speeches.
- the memory may store at least one instruction configured to, when the first speech including the food name is input via the microphone, allow the processor to acquire the food name and information on the user uttering the first speech by recognizing the first speech using a learning network model trained using an artificial intelligence algorithm, and configured to allow the processor to display, on the display, the food list including the food information that corresponds to the food name and is related to the user information.
- the learning network model may be trained by using a plurality of speeches and a plurality of pieces of user information corresponding to the plurality of speeches.
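The training setup described above (a model trained on a plurality of speeches and the words corresponding to them) is sketched below as a nearest-centroid classifier over toy acoustic feature vectors. This is only an illustrative stand-in; the patent's learning network model would be a neural network trained with an artificial intelligence algorithm, and every value here is invented.

```python
# Toy stand-in for a learning network model: training pairs of
# (speech feature vector, word) are averaged into one centroid per
# word, and recognition returns the word with the nearest centroid.

TRAINING = [
    # A plurality of speeches and the words corresponding to them.
    ([0.9, 0.1, 0.0], "milk"),
    ([0.8, 0.2, 0.1], "milk"),
    ([0.1, 0.9, 0.2], "apple"),
    ([0.0, 0.8, 0.3], "apple"),
]

def train(pairs):
    """Average the feature vectors of each word into one centroid."""
    sums, counts = {}, {}
    for features, word in pairs:
        acc = sums.setdefault(word, [0.0] * len(features))
        for i, x in enumerate(features):
            acc[i] += x
        counts[word] = counts.get(word, 0) + 1
    return {w: [x / counts[w] for x in acc] for w, acc in sums.items()}

def recognize(model, features):
    """Return the word whose centroid is nearest to the input speech."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda w: dist(model[w], features))

model = train(TRAINING)
word = recognize(model, [0.85, 0.15, 0.05])
```

The same structure extends to the user-information variant above: training on (speech, user) pairs instead of (speech, word) pairs yields speaker identification rather than word recognition.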
- the memory may store at least one instruction configured to, when the first speech is recognized, allow the processor to identify whether a food related to the food name is placed in the storage compartment, and configured to, when the food is placed in the storage compartment, allow the processor to display information indicating that the food is placed in the storage compartment on the display.
- the memory may store at least one instruction configured to, when the first speech is recognized, allow the processor to identify whether a food related to the food name is placed in the storage compartment, and configured to, when the food is not placed in the storage compartment, allow the processor to display the food list including the food information corresponding to the food name on the display.
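The inventory check described in the two claims above can be sketched as a single branch: report the food if it is already in the storage compartment, otherwise show a purchase list. The stock set and catalogue callback are illustrative assumptions.

```python
# Sketch of the inventory check: a recognized food name that is already
# in the storage compartment produces an "in stock" message; an absent
# food produces the purchase (food) list instead. Data are invented.

STORAGE_COMPARTMENT = {"milk", "eggs"}

def on_food_name(food_name, catalogue_lookup, stock=STORAGE_COMPARTMENT):
    """Decide what the display should show for a recognized food name."""
    if food_name in stock:
        # The food is already placed in the storage compartment.
        return {"screen": "in_stock",
                "message": f"{food_name} is in the refrigerator"}
    # The food is not placed in the storage compartment: show offers.
    return {"screen": "food_list", "items": catalogue_lookup(food_name)}

result = on_food_name("butter", lambda name: [f"{name} offer 1",
                                              f"{name} offer 2"])
```

How the refrigerator knows its contents (camera, barcode, or manual entry) is not specified in this passage, so the stock set is simply given here.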
- the memory may store at least one instruction configured to, when the first speech is recognized, allow the processor to overlap the food list, in a card-type user interface, with the user interface of an application displayed on the display.
- the memory may store at least one instruction configured to, when the second speech is recognized, allow the processor to execute an application providing information on the food indicated by the mark contained in the second speech, so as to display the food information provided from the application on the display.
- the refrigerator may further include a communication circuitry, and the memory may store at least one instruction configured to, when the first speech is recognized, allow the processor to transmit data of the first speech to a server via the communication circuitry, and configured to, when the communication circuitry receives information on the food list transmitted from the server, allow the processor to display the food list on the display.
- the refrigerator may further include a communication circuitry, and the memory may store at least one instruction configured to, when the second speech is recognized, allow the processor to transmit data of the second speech to a server via the communication circuitry, and configured to, when the communication circuitry receives food information transmitted from the server, allow the processor to display the food information on the display.
- the refrigerator may further include a communication circuitry
- the memory may store at least one instruction configured to, when the first speech is recognized, allow the processor to transmit data of the first speech to a server via the communication circuitry, and configured to, when the communication circuitry receives analysis information on the first speech transmitted from the server, allow the processor to transmit a command according to the analysis information to an application related to performance of an operation according to the analysis information, so as to allow the application to display the food list.
- the refrigerator may further include a communication circuitry
- the memory may store at least one instruction configured to, when the second speech is recognized, allow the processor to transmit data of the second speech to a server via the communication circuitry, and configured to, when the communication circuitry receives analysis information on the second speech transmitted from the server, allow the processor to transmit a command according to the analysis information to an application related to performance of an operation according to the analysis information, so as to allow the application to display the food information.
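The server round-trip described in these claims (speech data sent to a server, analysis information returned, and a command forwarded to the application related to the operation) can be sketched as follows. The fake server, the operation names, and the application registry are all assumptions for illustration.

```python
# Sketch of the server round-trip: speech data goes to a server (here
# faked locally), the returned analysis information names an operation
# and its parameters, and a command is routed to the application that
# is related to that operation. All names are invented.

def fake_server_analyze(speech_data):
    """Stand-in for the remote speech-analysis server."""
    if "order" in speech_data:
        return {"operation": "display_food_list", "food": "milk"}
    return {"operation": "unknown"}

def shopping_app(analysis):
    """Hypothetical application handler: displays the food list."""
    return f"food list for {analysis['food']}"

# Hypothetical registry: operation named in the analysis -> application.
APP_REGISTRY = {"display_food_list": shopping_app}

def on_speech(speech_data):
    """Send speech data to the server, then route the analysis result."""
    analysis = fake_server_analyze(speech_data)   # via communication circuitry
    handler = APP_REGISTRY.get(analysis["operation"])
    if handler is None:
        return None                               # no related application
    return handler(analysis)                      # application shows the result

shown = on_speech("order some milk")
```

In the patented arrangement the analysis itself happens on the server; only the routing of the resulting command to a local application is modeled here.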
- the refrigerator may further include a speaker
- the memory may store at least one instruction configured to, when the food list is displayed on the display due to the recognition of the first speech, allow the processor to output, via the speaker, a speech indicating that the list is displayed, and configured to, when the food information is displayed on the display due to the recognition of the second speech, allow the processor to output, via the speaker, a speech indicating that the food information is displayed.
- a refrigerator including a storage compartment configured to store food, a temperature detector configured to detect an internal temperature of the storage compartment, a cooler configured to supply cool air to the storage compartment, a microphone configured to receive a speech, a display configured to display information, at least one processor configured to be electrically connected to the temperature detector, the microphone, and the display, and a memory configured to be electrically connected to the at least one processor.
- the memory may store at least one instruction configured to, when a speech is input via the microphone, allow the processor to execute a command indicated by the input speech on an application displayed on the display, and configured to, when the processor is not able to execute the command on that application, allow the processor to execute the command on another application according to a priority that is pre-determined among applications related to the command.
- the refrigerator may further include a speaker, and the memory may store at least one instruction configured to, when the processor is not able to execute the command indicated by the input speech on the application, allow the processor to output, via the speaker, a speech indicating that the command cannot be executed on the application.
- the refrigerator may further include a speaker, and the memory may store at least one instruction configured to, when the priority is not present, allow the processor to output, via the speaker, a speech requesting confirmation of whether to execute the command on another application.
- the memory may store at least one instruction configured to, when an answer to the request for confirmation is input, allow the processor to execute the command indicated by the speech on an application indicated by the input answer.
- the memory may store at least one instruction configured to, when the processor is not able to execute the command indicated by the input speech on the application, allow the processor to select an application in which the command is to be executed, according to the priority that is pre-determined among applications related to the command, and configured to allow the processor to execute the command on the selected application.
- the memory may store at least one instruction configured to, when a target in which the command indicated by the speech is to be executed is not contained in the speech input via the microphone, allow the processor to execute the command indicated by the input speech on the application displayed on the display.
- a refrigerator including a storage compartment configured to store food, a temperature detector configured to detect an internal temperature of the storage compartment, a cooler configured to supply cool air to the storage compartment, a microphone configured to receive a speech, a display configured to display information, a speaker configured to output a speech, at least one processor configured to be electrically connected to the temperature detector, the microphone, and the display, and a memory configured to be electrically connected to the at least one processor.
- the memory may store at least one instruction configured to, when a speech is input via the microphone and it is needed to identify a user for executing a command indicated by the speech, allow the processor to display pre-registered users on the display, and configured to output, via the speaker, a speech requesting selection of a user from among the displayed users.
- the memory may store at least one instruction configured to, when a speech selecting at least one user among the displayed users is input, allow the processor to execute the command indicated by the speech according to the selected user, and configured to allow the processor to display a result thereof on the display.
- the memory may store at least one instruction configured to, when it is needed to identify a user for executing a command indicated by the speech, allow the processor to display the pre-registered users and a mark identifying each user on the display.
- the memory may store at least one instruction configured to, when a speech selecting at least one mark among the marks is input, allow the processor to execute a command indicated by the speech according to a user indicated by the mark, and configured to allow the processor to display a result thereof on the display.
- the memory may store at least one instruction configured to, when a speech is input via the microphone and an expression indicating a user is contained in the speech, allow the processor to execute a command indicated by the speech according to the user contained in the speech, and configured to allow the processor to display a result thereof on the display.
- the memory may store at least one instruction configured to, when a speech is input via the microphone, allow the processor to execute a command indicated by the speech according to a user indicated by speech data matching the input speech among pre-stored speech data, and configured to allow the processor to display a result thereof on the display.
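Matching an input speech against pre-stored speech data to pick a user can be illustrated as below. This is a simplified sketch under stated assumptions: real speaker identification would extract acoustic features from audio, whereas here fixed-length feature vectors and a cosine-similarity threshold of 0.8 are invented for illustration.

```python
# Illustrative sketch: identify the user whose stored speech data best
# matches the input speech. Feature vectors and threshold are assumed.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def identify_user(input_features, enrolled, threshold=0.8):
    """Return the pre-registered user whose stored speech data best
    matches the input speech, or None if no match clears the threshold."""
    best_user, best_score = None, threshold
    for user, features in enrolled.items():
        score = cosine_similarity(input_features, features)
        if score > best_score:
            best_user, best_score = user, score
    return best_user

# Hypothetical enrolled users with pre-stored speech feature vectors.
enrolled = {"Alice": [0.9, 0.1, 0.3], "Bob": [0.1, 0.8, 0.5]}
print(identify_user([0.88, 0.12, 0.31], enrolled))  # → Alice
```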
- Another aspect of the present disclosure provides an information display method of a refrigerator including receiving a first speech including a food name via a microphone, displaying a food list including food information corresponding to the food name and an identification mark identifying the food information, on a display installed on the front surface of the refrigerator, based on the recognition of the first speech, receiving a second speech referring to the identification mark via the microphone, and displaying at least one piece of purchase information of food corresponding to the identification mark, on the display, based on the recognition of the second speech.
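The two-turn flow of the method above can be sketched as follows. The food inventory, identification marks, and purchase information are invented placeholder data; only the overall sequence (food name → food list with marks → mark reference → purchase information) comes from the method described.

```python
# Minimal sketch of the two-turn information display method:
# turn 1: a speech containing a food name yields a food list whose
#         entries carry identification marks (A, B, ...);
# turn 2: a speech referring to a mark yields purchase information.

food_db = {
    "milk": [("A", "Whole milk 1L"), ("B", "Low-fat milk 1L")],
}
purchase_db = {
    "Whole milk 1L": {"store": "Store 1", "price": 2.50},
    "Low-fat milk 1L": {"store": "Store 2", "price": 2.20},
}

def first_turn(food_name):
    """Return the food list (mark, food information) to be displayed."""
    return food_db.get(food_name, [])

def second_turn(food_list, mark):
    """Return purchase information for the food the mark refers to."""
    for m, name in food_list:
        if m == mark:
            return purchase_db[name]
    return None

food_list = first_turn("milk")      # first speech: "milk"
print(second_turn(food_list, "B"))  # second speech refers to mark "B"
```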
- FIG. 1 is a view of an appearance of a refrigerator according to one embodiment of the present disclosure.
- FIG. 2 is a front view of the refrigerator according to one embodiment of the present disclosure.
- FIG. 3 is a block diagram of a configuration of the refrigerator according to one embodiment of the present disclosure.
- FIG. 4 is a view of a display included in the refrigerator according to one embodiment of the present disclosure.
- FIG. 5 is a view of a home screen displayed on the display included in the refrigerator according to one embodiment of the present disclosure.
- FIGS. 6 and 7 are views of a speech recognition application displayed on the display included in the refrigerator according to one embodiment of the present disclosure.
- FIG. 8 is a view illustrating a communication with an external device through a communication circuitry included in the refrigerator according to one embodiment of the present disclosure.
- FIGS. 9 and 10 are views illustrating a communication between a server and the refrigerator according to one embodiment of the present disclosure.
- FIGS. 11A to 14 are views illustrating a method for a user to purchase food through the refrigerator according to one embodiment of the present disclosure.
- FIGS. 15 and 16 are views illustrating a method in which the refrigerator according to one embodiment of the present disclosure performs a command in accordance with a speech command.
- FIGS. 17 to 20 are views illustrating a method in which the refrigerator according to one embodiment of the present disclosure identifies a user and provides information corresponding to the identified user.
- FIG. 21A is a block diagram of a controller for data training and recognition according to one embodiment of the present disclosure.
- FIG. 21B is a detailed block diagram of a data learner and a data recognizer according to one embodiment of the present disclosure.
- FIG. 22 is a flow chart of the refrigerator displaying information according to one embodiment of the present disclosure.
- FIG. 23 is a flow chart of a network system according to one embodiment of the present disclosure.
- FIG. 24 is a view illustrating a method in which a user purchases food via a refrigerator according to another embodiment of the present disclosure.
- terms such as “unit”, “part”, “block”, “member” and “module” may indicate a unit for processing at least one function or operation.
- the terms may indicate at least one process performed by at least one piece of hardware, such as a Field Programmable Gate Array (FPGA) or an Application Specific Integrated Circuit (ASIC), or at least one piece of software stored in a memory and executed by a processor.
- the term “food” is used to refer to industrial products manufactured by humans or machines or products produced or hunted by a user.
- FIG. 1 is a view of an appearance of a refrigerator according to one embodiment of the present disclosure
- FIG. 2 is a front view of the refrigerator according to one embodiment of the present disclosure
- FIG. 3 is a block diagram of a configuration of the refrigerator according to one embodiment of the present disclosure
- FIG. 4 is a view of a display included in the refrigerator according to one embodiment of the present disclosure.
- a refrigerator 1 may include a body 10 including an open front surface, and a door 30 opening and closing the open front surface of the body 10 .
- the body 10 is provided with a storage compartment 20 having an open front surface to keep food in a refrigeration manner or a frozen manner.
- the body 10 may form an outer appearance of the refrigerator 1 .
- the body 10 may include an inner case 11 forming a storage compartment 20 and an outer case 12 forming an appearance of the refrigerator by being coupled to the outside of the inner case 11 .
- a heat insulating material (not shown) may be filled between the inner case 11 and the outer case 12 of the body 10 to prevent leakage of the cooling air of the storage compartment 20 .
- the storage compartment 20 may be divided into a plurality of storage compartments by a horizontal partition 21 and a vertical partition 22 .
- the storage compartment 20 may be divided into an upper storage compartment 20 a , a lower first storage compartment 20 b , and a lower second storage compartment 20 c .
- a shelf 23 on which food is placed, and a closed container 24 in which food is stored in a sealing manner may be provided in the storage compartment 20 .
- the storage compartment 20 may be opened and closed by the door 30 .
- the upper storage compartment 20 a may be opened and closed by an upper first door 30 aa and an upper second door 30 ab .
- the lower first storage compartment 20 b may be opened and closed by a lower first door 30 b
- the lower second storage compartment 20 c may be opened and closed by a lower second door 30 c.
- the door 30 may be provided with a handle 31 easily opening and closing the door 30 .
- the handle 31 may be elongated in the vertical direction between the upper first door 30 aa and the upper second door 30 ab , and between the lower first door 30 b and the lower second door 30 c . Therefore, when the door 30 is closed, the handle 31 may be seen as being integrally provided.
- the refrigerator 1 may include at least one of a display 120 , a storage 130 , a communication circuitry 140 , a dispenser 150 , a cooler 160 , a temperature detector 170 , an audio 180 , and a main controller 110 .
- the display 120 may interact with a user.
- the display 120 may receive user input from a user and display an image according to the received user input.
- the display 120 may include a display panel 121 displaying an image, a touch panel 122 receiving user input, and a touch screen controller 123 controlling/driving the display panel 121 and the touch panel 122 .
- the display panel 121 may convert image data received from the main controller 110 into an optical image, which is viewed by a user, through the touch screen controller 123 .
- the display panel 121 may employ a cathode ray tube (CRT) display panel, a liquid crystal display (LCD) panel, a light emitting diode (LED) panel, an organic light emitting diode (OLED) panel, a plasma display panel (PDP), or a field emission display (FED) panel.
- the display panel 121 is not limited thereto, and the display panel 121 may employ various display means visually displaying an optical image corresponding to image data.
- the touch panel 122 may receive a user's touch input and transmit an electrical signal corresponding to the received touch input to the touch screen controller 123 .
- the touch panel 122 detects a user touch on the touch panel 122 and transmits an electrical signal corresponding to coordinates of the user touch point, to the touch screen controller 123 .
- the touch screen controller 123 may acquire the coordinates of the user touch point based on the electrical signals received from the touch panel 122 .
- the touch panel 122 may be located on the front surface of the display panel 121 .
- the touch panel 122 may be provided on a surface on which an image is displayed. Accordingly, the touch panel 122 may be made of a transparent material to prevent the image displayed on the display panel 121 from being distorted.
- the touch panel 122 may employ a resistance film type touch panel or a capacitive touch panel.
- the touch panel 122 is not limited thereto, and thus the touch panel may employ a variety of input means detecting the user's touch or approach, and outputting an electrical signal corresponding to coordinates of the detected touch point or coordinates of approach point.
- the touch screen controller 123 may drive/control an operation of the display panel 121 and the touch panel 122 . Particularly, the touch screen controller 123 may drive the display panel 121 to allow the display panel 121 to display an optical image corresponding to the image data received from the main controller 110 , and control the touch panel 122 to allow the touch panel to detect the coordinates of the user touch point.
- the touch screen controller 123 may identify the coordinates of the user touch point based on the electrical signal output from the touch panel 122 , and transmit the identified coordinates of the user touch point to the main controller 110 .
- the touch screen controller 123 may transmit the electrical signal output from the touch panel 122 to the main controller 110 to allow the main controller 110 to identify the coordinates of the user touch point.
- the touch screen controller 123 may include a memory (not shown) storing programs and data for controlling an operation of the display panel 121 and the touch panel 122 , and a microprocessor (not shown) performing an operation for controlling the operation of the display panel 121 and the touch panel 122 according to the programs and data stored in the memory.
- the memory and the processor of the touch screen controller 123 may be provided as a separate chip or a single chip.
- the display 120 may be installed in the door 30 for the user's convenience.
- the display 120 may be installed in the upper second door 30 ab .
- the display 120 installed on the upper second door 30 ab will be described.
- the position of the display 120 is not limited to the upper second door 30 ab .
- the display 120 may be installed at any position as long as a user can see it, such as the upper first door 30 aa , the lower first door 30 b , the lower second door 30 c , and the outer case 12 of the body 10 .
- the display 120 may be provided with a wake up function that is automatically activated when a user approaches within a predetermined range. For example, when the user approaches within the predetermined range, the display 120 may be activated. In other words, the display 120 may be turned on. On the other hand, when the user is outside the predetermined range, the display 120 may be inactivated. In other words, the display 120 may be turned off.
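The wake-up behavior can be stated as a simple rule over the user's distance. This is a minimal sketch, assuming a proximity sensor that reports a distance and an example range value of 1 m; the disclosure only states that the display activates within a predetermined range.

```python
# Sketch of the wake-up function: activate the display when the user is
# within the predetermined range, deactivate it otherwise. The range
# value and the distance-reporting sensor are assumptions.

def display_should_be_on(distance_m, wake_range_m=1.0):
    """Return True (display on) when the user is within the
    predetermined range, False (display off) otherwise."""
    return distance_m <= wake_range_m

print(display_should_be_on(0.5))  # user within range: display turns on
print(display_should_be_on(2.0))  # user outside range: display turns off
```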
- the display 120 may display various screens or images. Screens or images displayed on the display 120 will be described in detail below.
- the storage 130 may store control programs and control data for controlling the operation of the refrigerator 1 and various application programs and application data for performing various functions according to user input.
- the storage 130 may include an operating system (OS) program for managing the configuration and resources (software and hardware) included in the refrigerator 1 , an image display application for displaying images stored in advance, a video play application for playing video stored in advance, a music application for playing music, a radio application for playing a radio, a calendar application for managing a schedule, a memo application for storing a memo, an on-line shopping mall application for purchasing food online, and a recipe application for providing a recipe.
- the storage 130 may include non-volatile memory that does not lose program or data even when power is turned off.
- the storage 130 may include large capacity flash memory or solid state drive (SSD) 131 .
- the communication circuitry 140 may transmit data to an external device or receive data from an external device under the control of the main controller 110 .
- the communication circuitry 140 may include at least one of communication modules 141 , 142 , and 143 transmitting and receiving data according to a predetermined communication protocol.
- the communication circuitry 140 may include a WiFi (Wireless Fidelity: WiFi®) module 141 connecting to a local area network (LAN) via an access point, a Bluetooth (Bluetooth®) module 142 communicating with an external device in a one-to-one relationship or communicating with a small number of external devices in a one-to-many relationship, and a ZigBee module 143 forming a LAN among a plurality of electronic devices (mainly home appliances).
- the plurality of communication modules 141 , 142 , and 143 may include an antenna transmitting and receiving radio signals to and from free space, and a modulator/demodulator modulating data to be transmitted or demodulating received radio signals.
- the dispenser 150 may discharge water or ice according to the user's input. In other words, the user may directly take out water or ice to the outside without opening the door 30 through the dispenser 150 .
- the dispenser 150 may include a dispenser lever 151 receiving a user's discharge command, a dispenser nozzle 152 discharging water or ice, a flow path 153 guiding water from an external water source to the dispenser nozzle 152 , a filter 154 purifying water being discharged, and a dispenser display panel 155 displaying an operation state of the dispenser 150 .
- the dispenser 150 may be installed outside the door 30 or the body 10 .
- the dispenser 150 may be installed in the upper first door 30 aa .
- the dispenser 150 installed in the upper first door 30 aa will be described.
- the position of the dispenser 150 is not limited to the upper first door 30 aa . Therefore, the dispenser 150 may be installed on any position as long as a user can take out water or ice, such as the upper second door 30 ab , the lower first door 30 b , the lower second door 30 c , and the outer case 12 of the body 10 .
- the door 30 or the outer case 12 may be provided with a cavity 150 a recessed inward of the refrigerator to form a space for taking out water or ice, and the cavity 150 a may be provided with the dispenser nozzle 152 and the dispenser lever 151 .
- when the user pushes the dispenser lever 151 , water or ice is discharged from the dispenser nozzle 152 .
- when water is discharged through the dispenser nozzle 152 , water may flow from the external water source (not shown) to the dispenser nozzle 152 along the flow path 153 . Further, the water may be purified by the filter 154 while flowing to the dispenser nozzle 152 .
- the filter 154 may be detachably installed in the body 10 or the door 30 and thus when the filter 154 is broken down, the filter 154 may be replaced by a new filter.
- the cooler 160 may supply cool air to the storage compartment 20 .
- the cooler 160 may maintain the temperature of the storage compartment 20 within a predetermined range by using evaporation of the refrigerant.
- the cooler 160 may include a compressor 161 compressing gas refrigerant, a condenser 162 converting the compressed gas refrigerant into liquid refrigerant, an expander 163 reducing the pressure of the liquid refrigerant, and an evaporator 164 converting the decompressed liquid refrigerant into a gaseous state.
- the cooler 160 may supply cool air to the storage compartment 20 by using the phenomenon that the decompressed liquid refrigerant absorbs the thermal energy of the ambient air while being converted into the gas state.
- the configuration of the cooler 160 is not limited to the compressor 161 , the condenser 162 , the expander 163 and the evaporator 164 .
- the cooler 160 may include a Peltier element using the Peltier effect.
- the Peltier effect means that when a current flows through a contact surface where two different metals are in contact with each other, heat is generated at one of the metals and absorbed at the other metal.
- the cooler 160 may supply cool air to the storage compartment 20 using a Peltier element.
- the cooler 160 may include a magnetic cooler using magnetocaloric effect.
- the magnetocaloric effect means that when a certain substance (magnetocaloric material) is magnetized, it releases heat, and when a certain substance (magnetocaloric material) is demagnetized, it absorbs heat.
- the cooler 160 may supply cool air to the storage compartment 20 using a magnetic cooler.
- the temperature detector 170 may be placed inside the storage compartment 20 to detect the temperature inside the storage compartment 20 .
- the temperature detector 170 may include a plurality of temperature sensors 171 installed in the plurality of storage compartments 20 a , 20 b , and 20 c , respectively.
- each of the plurality of temperature sensors 171 may include a thermistor in which an electrical resistance varies with temperature.
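Converting such a thermistor's resistance into a temperature is commonly done with the Beta-parameter model. The sketch below is general background, not part of the disclosure; the nominal values (10 kΩ at 25 °C, B = 3950) are typical assumptions for an NTC thermistor.

```python
# Beta-parameter model for an NTC thermistor, whose electrical
# resistance falls as temperature rises:
#   1/T = 1/T0 + (1/B) * ln(R/R0), with temperatures in kelvin.
# Nominal values below are illustrative assumptions.
import math

def thermistor_temp_c(r_ohm, r0=10_000.0, t0_c=25.0, beta=3950.0):
    """Return the temperature in Celsius for a measured resistance."""
    t0_k = t0_c + 273.15
    inv_t = 1.0 / t0_k + math.log(r_ohm / r0) / beta
    return 1.0 / inv_t - 273.15

print(round(thermistor_temp_c(10_000.0), 1))  # nominal point: 25.0 °C
```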
- the audio 180 may include a speaker 181 converting an electrical signal received from the main controller 110 into an acoustic signal and outputting the acoustic signal, and a microphone 182 converting an acoustic signal into an electrical signal and outputting the electrical signal to the main controller 110 .
- the main controller 110 may control the display 120 , the storage 130 , the communication circuitry 140 , the dispenser 150 , the cooler 160 , the temperature detector 170 and the audio 180 contained in the refrigerator 1 .
- the main controller 110 may include a microprocessor 111 performing operations to control the refrigerator 1 and a memory 112 storing/memorizing programs and data related to the performance of the operation of the microprocessor 111 .
- the microprocessor 111 may load data stored/memorized in the memory 112 according to the program stored in the memory 112 and may perform an arithmetic operation or a logical operation on the loaded data. Further, the microprocessor 111 may output the result of the arithmetic operation or the logical operation to the memory 112 .
- the memory 112 may include a volatile memory that loses stored data when the power supply is stopped.
- the volatile memory may load programs and data from the above-described storage 130 and temporarily memorize the loaded data.
- the volatile memory may provide the memorized program and data to the microprocessor 111 , and memorize the data output from the microprocessor 111 .
- These volatile memories may include S-RAM and D-RAM.
- the memory 112 may include a non-volatile memory as needed.
- the non-volatile memory may preserve the memorized data when the power supply is stopped.
- the non-volatile memory may store firmware for managing and initializing various components contained in the refrigerator 1 .
- the non-volatile memory may include Read Only Memory (ROM), Erasable Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM) and flash memory.
- the main controller 110 may include a plurality of microprocessors 111 and a plurality of memories 112 .
- the main controller 110 may include a first microprocessor and a first memory controlling the temperature detector 170 , the dispenser 150 and the cooler 160 of the refrigerator 1 .
- the main controller 110 may include a second microprocessor and a second memory controlling the display 120 , the storage 130 , the communication circuitry 140 , and the audio 180 of the refrigerator 1 .
- although the microprocessor 111 and the memory 112 are functionally distinguished, the microprocessor 111 and the memory 112 may not be physically distinguished from each other.
- the microprocessor 111 and the memory 112 may be implemented as separate chips or a single chip.
- the main controller 110 may control the overall operation of the refrigerator 1 , and it may be assumed that the operation of the refrigerator 1 described below is performed under the control of the main controller 110 .
- although the main controller 110 , the storage 130 and the communication circuitry 140 are functionally distinguished from each other in the above description, the main controller 110 , the storage 130 , and the communication circuitry 140 may not be physically distinguished from each other.
- the main controller 110 , the storage 130 , and the communication circuitry 140 may be implemented as separate chips or a single chip.
- the display 120 , the storage 130 , the communication circuitry 140 , the dispenser 150 , the cooler 160 , the temperature detector 170 , the audio 180 and the main controller 110 contained in the refrigerator 1 have been described, but a new configuration may be added or some configurations may be omitted as needed.
- FIG. 5 is a view of a home screen displayed on the display included in the refrigerator according to one embodiment of the present disclosure.
- the main controller 110 may allow the display 120 to display a home screen 200 as illustrated in FIG. 5 .
- a time/date area 210 displaying a time and date, an operational information area 220 displaying operation information of the refrigerator 1 , and a plurality of launchers 230 for executing applications stored in the storage 130 may be displayed on the home screen 200 .
- Current time information and today's date information may be displayed on the time/date area 210 . Further, location information (e.g., name of country or city) of the location where the refrigerator 1 is located may be displayed on the time/date area 210 .
- a storage compartment map 221 related to the operation of the plurality of storage compartments 20 a , 20 b , and 20 c contained in the refrigerator 1 may be displayed on the operation information area 220 .
- Information related to the operation of the plurality of storage compartments 20 a , 20 b , and 20 c contained in the refrigerator 1 may be displayed on the storage compartment map 221 .
- the upper storage compartment 20 a , the lower first storage compartment 20 b , and the lower second storage compartment 20 c may be partitioned and displayed on the storage compartment map 221 , and a target temperature of the upper storage compartment 20 a , a target temperature of the lower first storage compartment 20 b , and a target temperature of the lower second storage compartment 20 c may be displayed on the storage compartment map 221 .
- the main controller 110 may display an image for indicating the target temperature of each storage compartment 20 a , 20 b , and 20 c on the display 120 .
- an image for setting the target temperature of the upper storage compartment 20 a may be displayed on the display 120 .
- a timer setting icon 222 and a refrigerator setting icon 223 for executing an application controlling the operation of the refrigerator 1 may be displayed on the operational information area 220 .
- when the timer setting icon 222 is touched, a timer setting screen for setting a target time of the timer may be displayed on the display 120 .
- the user can input a time at which an alarm will be output or a time interval until an alarm will be output, via the timer setting screen.
- the refrigerator 1 may output the alarm at the time input by the user, or the refrigerator 1 may output the alarm at the time when the time interval input by the user elapses.
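The two alarm modes described above (an absolute time or an elapsed interval) reduce to one computation of the moment the alarm should sound. The sketch below is illustrative only; the function name and interface are assumptions.

```python
# Sketch of the timer behavior: the alarm fires either at an absolute
# time the user entered, or after a user-entered interval elapses.
import datetime

def alarm_time(now, at=None, after_seconds=None):
    """Return the moment the alarm should sound: either the absolute
    time 'at', or 'now' plus the given interval (one must be given)."""
    if at is not None:
        return at
    return now + datetime.timedelta(seconds=after_seconds)

now = datetime.datetime(2017, 1, 1, 12, 0, 0)
print(alarm_time(now, after_seconds=90))  # interval mode: 12:01:30
```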
- when the refrigerator setting icon 223 is touched, the main controller 110 may display an operation setting screen for inputting a setting value for controlling the operation of the refrigerator 1 , on the display 120 .
- via the operation setting screen, the user can set the target temperature of each of the plurality of storage compartments 20 a , 20 b , and 20 c contained in the refrigerator 1 , and set the selection between the water and the ice to be discharged through the dispenser 150 .
- the plurality of launchers 230 for executing applications stored in the storage 130 may be displayed on the home screen 200 .
- an album launcher 231 for executing an album application displaying pictures stored in the storage 130 , a recipe launcher 232 for executing a recipe application providing food recipes, and a screen setting launcher 233 for executing a screen setting application controlling the operation of the display 120 may be displayed on the home screen 200 .
- a home appliance control launcher 234 for executing a home appliance control application controlling various home appliances through the refrigerator 1 , a speech output setting launcher 235 for setting an operation of a speech output application outputting various contents in a speech, and an online shopping launcher 236 for executing a shopping application for online shopping may be displayed on the home screen 200 .
- main information related to the operation of the refrigerator 1 and launchers for executing the variety of applications may be displayed.
- FIG. 5 is merely an example of the home screen 200 , and thus the refrigerator 1 may display various types of home screens according to the user's settings. Further, information and launchers displayed on the home screen are not limited to FIG. 5 .
- FIG. 6 illustrates a user interface (UI) of a speech recognition displayed on the display 120 of the refrigerator according to one embodiment of the present disclosure.
- hereinafter, the main controller 110 is referred to as the controller 110 for convenience.
- the controller 110 may display a speech recognition user interface (UI) 250 on the display 120 based on a predetermined wake word being input via the microphone 182 .
- the speech recognition UI 250 having a configuration illustrated in FIG. 6 may be displayed on the display 120 .
- the wake word may be composed of a predetermined word or a combination of words, and may be changed to an expression desired by the user.
- the controller 110 may execute a speech recognition function using a wake word uttered by the user, as described above. Alternatively, as illustrated in FIG. 7 , the controller 110 may execute a speech recognition function when a microphone-shaped button 271 displayed on a notification UI is touched.
- the shape of the launcher indicating a button for executing the speech recognition function is not limited to the shape of the microphone and thus may be implemented in a variety of images.
- the notification UI may be displayed on the display 120 as illustrated in FIG. 7 , when a touch gesture is input.
- the touch gesture may be swiping upward from a specific point at the bottom of the display 120 .
- the launcher for executing the speech recognition function may be displayed on the notification UI or the home screen 200 of the display 120 .
- the controller 110 may identify the user intention by recognizing and analyzing the user speech composed of natural language by using the speech recognition function, and may perform a command according to the user intention. Further, by outputting a process or result of a command according to a user intention, as a speech, the controller 110 may allow the user to acoustically recognize the process or the result of the speech command.
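The sequence described above (wake word, natural-language analysis, command execution, spoken result) can be sketched end to end. This is a rough illustration only: the wake word, the `analyze()` stub standing in for the server-side analysis (the first server SV1), and the intent format are all assumptions.

```python
# Rough sketch of the speech-handling flow: ignore speech until the
# wake word is heard, analyze the remainder as a natural-language
# command, execute it, and produce the reply that would be displayed
# and spoken via TTS. All names and formats here are illustrative.

WAKE_WORD = "hi fridge"  # example wake word; the text says it is user-changeable

def analyze(utterance):
    # Stand-in for the server-side natural-language analysis (SV1).
    if "temperature" in utterance:
        return {"intent": "get_temperature", "target": "upper"}
    return {"intent": "unknown"}

def handle_utterance(utterance, temps={"upper": 3}):
    if not utterance.startswith(WAKE_WORD):
        return None  # speech without the wake word is ignored
    command = utterance[len(WAKE_WORD):].strip()
    result = analyze(command)
    if result["intent"] == "get_temperature":
        reply = f"The {result['target']} compartment is {temps[result['target']]} degrees."
    else:
        reply = "Sorry, I did not understand."
    return reply  # would be shown on the display and spoken via TTS

print(handle_utterance("hi fridge what is the temperature?"))
```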
- the speech recognition UI 250 may be displayed on the display 120 , as illustrated in FIGS. 6 and 7 .
- the speech recognition UI 250 may include a first area 252 on which a word or a sentence, which is output as a speech by using a text to speech (TTS) function, is displayed, a second area 254 on which a state of the speech recognition function is displayed, a third area 260 indicating that a speech is being output through the TTS function, a setting object 256 for setting the speech recognition function, and a help object 258 providing a help for the use of the speech recognition function.
- the second area 254 on which a state of the speech recognition function is displayed may display, in a text manner, a state in which the user speech is being input, a state of waiting for the input of the user speech, or a state in which a process according to the input of the user speech is being performed.
- the state in which the user speech is being input may be displayed as “listening”
- the state of waiting may be displayed as “standby”
- the state in which the process is being performed may be displayed as “processing”.
- the above-mentioned English text is merely an example and thus the state may be displayed in Korean text or may be displayed as an image rather than text.
- the speech recognition UI 250 may be displayed as a card type UI that pops up on the home screen 200 , or may be displayed on the entire screen of the display 120 .
- the controller 110 may transmit speech data to a first server SV 1 described later, and execute a command by receiving information, which is acquired by analyzing the speech, from the first server SV 1 .
- FIG. 8 is a view illustrating a communication with an external device through a communication circuitry included in the refrigerator according to one embodiment of the present disclosure
- FIGS. 9 and 10 are views illustrating a communication between a server and the refrigerator according to one embodiment of the present disclosure.
- the refrigerator 1 may communicate with various electronic devices as well as the servers SV 1 and SV 2 through the communication circuitry 140 .
- the refrigerator 1 may be connected to an access point (AP) via the communication circuitry 140 .
- the refrigerator 1 may be connected to the access point (AP) by using a wireless communication standard such as Wi-Fi™ (IEEE 802.11), Bluetooth™ (IEEE 802.15.1), or Zigbee (IEEE 802.15.4).
- the AP may be referred to as “hub”, “router”, “switch”, or “gateway” and the AP may be connected to a wide area network (WAN).
- various electronic devices such as an air conditioner 2 , a washing machine 3 , an oven 4 , a microwave oven 5 , a robot cleaner 6 , a security camera 7 , a light 8 and a television 9 may be connected to the AP.
- the electronic devices 1 - 9 connected to the AP may form a local area network (LAN).
- the AP may connect the LAN formed by the electronic devices 1 - 9 connected to the AP to a WAN such as the Internet.
- the first server SV 1 , which provides the refrigerator with information acquired by analyzing speech data, and the second server SV 2 , which is operated by a provider supplying information through an application installed in the refrigerator, may be connected to the WAN.
- the second server SV 2 may include a server (hereinafter referred to as “store server”) selling food online via a shopping application such as the store application provided in the refrigerator, a server (hereinafter referred to as “weather server”) providing information to a weather application, a server (hereinafter referred to as “recipe server”) providing recipe information to a recipe application, and a server (hereinafter referred to as “music server”) providing music information to a music application.
- a mobile terminal (MT) may be connected to the WAN.
- the MT may be directly connected to the WAN or may be connected to the WAN through the AP according to the location of the MT. For example, when the MT is located close to the AP, the MT may be connected to the WAN through the AP. When the MT is located far from the AP, the MT may be directly connected to the WAN through a mobile communication service provided by a mobile communication service provider.
- the refrigerator 1 may transmit data to the first server SV 1 and/or the second server SV 2 , and receive data from the first server SV 1 and/or the second server SV 2 .
- the refrigerator 1 may transmit data of user speech input via the microphone 182 , to the first server SV 1 and the first server SV 1 may transmit analysis information including the user intention, which is acquired by analyzing the speech data, to the refrigerator.
- the refrigerator 1 may transmit the analyzed information of the user speech data to the second server SV 2 and may receive information related to a certain application, such as food information for the store application, from the second server SV 2 .
- the refrigerator 1 may communicate with the first server SV 1 and receive the analysis information of the user speech data via the first server SV 1 .
- the refrigerator 1 may communicate with the second server SV 2 and receive the information related to the certain application via the second server SV 2 .
- the communication between the refrigerator and the server will be described in more detail with reference to FIGS. 9 and 10 .
- the refrigerator 1 transmits speech data to the first server SV 1 when a speech command uttered by the user is input ( 1000 ). Based on the speech data being received, the first server SV 1 may derive the user intention by analyzing the speech data ( 1010 ).
- based on a user speech command being input via the microphone 182 , the controller 110 of the refrigerator 1 converts the analog speech signal into speech data, which is a digital signal, and transmits the speech data to the first server SV 1 through the communication circuitry 140 .
- the controller 110 of the refrigerator 1 may convert an analog speech signal into a digital speech signal by a pulse code modulation method. For example, based on a speech command, such as “let me know today's weather” being input, the controller 110 of the refrigerator 1 may convert the speech command into a digital signal and transmit the digital signal to the first server SV 1 .
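The pulse code modulation step mentioned above can be sketched as clipping and quantizing normalized analog samples into integer codes; the bit depth and data shapes here are illustrative assumptions, not details from the disclosure.

```python
def pcm_encode(samples, bits=16):
    """Quantize normalized analog samples (floats in [-1.0, 1.0])
    into signed integer PCM codes."""
    max_code = 2 ** (bits - 1) - 1    # 32767 for 16-bit PCM
    encoded = []
    for s in samples:
        s = max(-1.0, min(1.0, s))    # clip out-of-range samples
        encoded.append(round(s * max_code))
    return encoded
```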
- the first server SV 1 may include an automatic speech recognition (ASR) portion configured to remove noise by pre-processing the speech data transmitted from the refrigerator 1 and to convert the speech data into text by analyzing it, and a natural language understanding (NLU) portion configured to identify a user intention based on the text acquired by the conversion by the automatic speech recognition portion.
- the first server SV 1 may include a learning network model that is trained using an artificial intelligence algorithm. In this case, the first server SV 1 may identify (or recognize, estimate, infer, predict) the user intention by applying speech data transmitted from the refrigerator 1 to the learning network model.
- the automatic speech recognition portion and the natural language understanding portion of the first server SV 1 may identify the user intention as requesting information about the weather in the area where the user is currently located.
- the first server SV 1 may transmit the analysis information of the speech data to the second server SV 2 ( 1020 ), and the second server SV 2 may transmit information, which is requested by the speech data according to the analysis information of the speech data, to the first server SV 1 ( 1030 ).
- the first server SV 1 may transmit the information, which is transmitted from the second server SV 2 , to the refrigerator 1 ( 1040 ).
- based on the user intention contained in the speech data, the first server SV 1 selects the second server SV 2 that provides information to an application related to the user intention, and transmits the analysis information of the speech data to the selected second server SV 2 .
- the first server SV 1 may transmit the analysis information of the speech data to the weather server providing information to the weather application among the second servers SV 2 .
- the second server SV 2 may search for information matching the user intention, based on the analysis information of the speech data, and transmit the retrieved information to the first server SV 1 .
- the weather server may generate the current weather information of the area where the user is located, based on the analysis information of the speech data, and transmit the generated weather information to the first server SV 1 .
- the first server SV 1 converts the information transmitted from the second server SV 2 into a JavaScript Object Notation (JSON) file format and transmits the JSON file to the refrigerator 1 .
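The JSON packaging described above can be sketched as follows; the disclosure specifies only the JSON file format, so the field names and payload shape are assumptions for illustration.

```python
import json

def to_json_payload(intent, result):
    """Package second-server results as a JSON string for the refrigerator."""
    return json.dumps({"intent": intent, "result": result})

# example payload for the weather request described in the text
payload = to_json_payload(
    "weather.today",
    {"location": "** street", "temperature_c": 20, "humidity_pct": 50},
)
```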
- the refrigerator 1 displays the information transmitted from the first server SV 1 on the display 120 as a card type user interface (UI), and outputs the information as speech ( 1050 ).
- the controller 110 of the refrigerator 1 may display the received information on the display 120 as the card type UI, and output the received information in a speech by using the TTS function.
- the controller 110 of the refrigerator 1 may display information including the user location, time and weather, on the display 120 as the card type UI.
- the controller 110 may output the weather information displayed in the card type UI, as a speech such as “Today's current weather in ** street is 20 degrees Celsius, humidity is 50 percent and clear without clouds”.
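Composing the spoken summary from the card UI fields, as in the example utterance above, might look like the following sketch; the function name and parameters are assumptions mirroring the example sentence.

```python
def weather_speech(street, temp_c, humidity_pct, sky):
    """Compose the spoken weather summary from the card type UI fields."""
    return (f"Today's current weather in {street} is {temp_c} degrees "
            f"Celsius, humidity is {humidity_pct} percent and {sky}")

text = weather_speech("** street", 20, 50, "clear without clouds")
```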
- the refrigerator 1 transmits the speech data to the first server SV 1 based on the speech command uttered by the user ( 1100 ). Based on the speech data being received, the first server SV 1 may derive the user intention by analyzing the speech data ( 1110 ).
- based on a user speech command being input via the microphone 182 , the controller 110 of the refrigerator 1 converts the analog speech signal into speech data, which is a digital signal, and transmits the speech data to the first server SV 1 through the communication circuitry 140 .
- the controller 110 of the refrigerator 1 may convert an analog speech signal into a digital speech signal in a pulse code modulation method. For example, based on a speech command, such as “play next song” being input, the controller 110 of the refrigerator 1 may convert the speech command into a digital signal and transmit the digital signal to the first server SV 1 . In addition, based on a speech command, such as “let me know apple pie recipe” being input, the controller 110 of the refrigerator 1 may convert the speech command into a digital signal and transmit the digital signal to the first server SV 1 .
- the first server SV 1 may include an automatic speech recognition (ASR) portion configured to remove noise by pre-processing the speech data transmitted from the refrigerator 1 and to convert the speech data into text by analyzing it, and a natural language understanding (NLU) portion configured to identify a user intention based on the text acquired by the conversion by the automatic speech recognition portion.
- the first server SV 1 may include a learning network model that is trained using an artificial intelligence algorithm. In this case, the first server SV 1 may identify (or recognize, estimate, infer, predict) the user intention by applying speech data transmitted from the refrigerator 1 to the learning network model.
- the automatic speech recognition portion and the natural language understanding portion of the first server SV 1 may identify the user intention as asking to play the song following the one currently being played.
- the automatic speech recognition portion and the natural language understanding portion of the first server SV 1 may identify the user intention as asking for an apple pie recipe.
- the first server SV 1 may transmit the analysis information of the speech data to the refrigerator 1 ( 1120 ), and the refrigerator 1 may transmit the analysis information of the speech data to the second server SV 2 ( 1130 ).
- the second server SV 2 may transmit information, which is requested by the speech data according to the analysis information of the speech data, to the refrigerator 1 ( 1140 ).
- the first server SV 1 converts the analysis information of the user speech data into a JSON file format, and transmits the JSON file to the refrigerator 1 .
- the controller 110 of the refrigerator 1 may select an application based on the analysis information and output the analysis information to the selected application.
- the controller 110 of the refrigerator 1 may select the music application as an application capable of performing a function related to the user intention.
- the controller 110 may transmit a command for requesting to play the next song, to the selected music application.
- the controller 110 of the refrigerator 1 may select the recipe application as an application capable of performing a function related to the user intention.
- the controller 110 may transmit a command for requesting information on the apple pie recipe, to the selected recipe application.
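The application selection described above can be sketched as a lookup from the identified intention to the application capable of handling it; the intent names and registry shape are assumptions for illustration, not the disclosed implementation.

```python
# Hypothetical registry mapping identified intents to applications.
APP_FOR_INTENT = {
    "music.play_next": "music application",
    "recipe.lookup": "recipe application",
    "food.order": "store application",
}

def select_application(intent):
    """Return the application capable of performing the requested function."""
    return APP_FOR_INTENT.get(intent)
```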
- the selected application may transmit the analysis information to the second server SV 2 providing information.
- the second server SV 2 may search for information meeting the user intention based on the analysis information of the speech data and transmit the retrieved information to the refrigerator 1 .
- the music application may request information on the next song from the second server SV 2 providing information to the music application, that is, the music server.
- the music server may select the song following the currently played song based on the analysis information of the speech data, and provide information on the selected song to the music application.
- the recipe application may request information on the apple pie recipe to the second server SV 2 , which is configured to provide information to the recipe application and corresponds to the recipe server.
- the recipe server may provide a pre-stored apple pie recipe to the recipe application based on the analysis information of the speech data, or search for another apple pie recipe and provide the retrieved recipe to the recipe application.
- the refrigerator 1 may execute the related application according to the received information ( 1150 ).
- the controller 110 may execute the selected application to provide information corresponding to the user speech according to the information transmitted from the second server SV 2 .
- the controller 110 executes the music application upon receiving the information on the next song transmitted from the music server.
- the music application may stop playing the current song and play the next song according to the information on the next song transmitted from the music server.
- a music application UI may be displayed on the display 120 of the refrigerator 1 , and thus information on the playback of the next song may be displayed.
- the output of the currently played song may be stopped and the next song may be output through the speaker 181 of the refrigerator 1 .
- the controller 110 executes the recipe application upon receiving the information on the apple pie recipe transmitted from the recipe server.
- a recipe application UI may be displayed on the display 120 of the refrigerator 1 and thus the apple pie recipe may be displayed.
- a speech for reading the apple pie recipe may be output through the speaker 181 of the refrigerator 1 .
- the information on the user speech may be received via the communication between the first server SV 1 and the second server SV 2 .
- the information on the user speech may be received via the communication between the refrigerator 1 and the second server SV 2 .
- FIGS. 11A to 14 are views illustrating a method for a user to purchase food through the refrigerator according to one embodiment of the present disclosure.
- a speech for ordering a certain food with a wake word may be input via the microphone 182 ( 900 ). Based on the wake word being recognized, the controller 110 may analyze the user speech by executing the speech recognition function, receive a food list 300 corresponding to the user speech from the related application (e.g., the store application), and display the food list in the card type UI on the display 120 ( 901 ).
- the refrigerator 1 may analyze the user speech through the communication with the first server SV 1 by the method illustrated in FIG. 9 , so as to identify the user intention, and receive information, which is related to the user intention (e.g., food list) and transmitted from the second server SV 2 , via the first server SV 1 .
- the received information may be used as an input value of an application for displaying a food list, and thus the food list 300 may be displayed in the card type UI as illustrated in FIG. 11A .
- the result of the analysis of the user speech may be a food name.
- the food name may include not only the exact name of the food but also an idiom or slang referring to the food, a part of the food name, or a name similar to the food name. Further, the food name may also be an alias that is uttered or registered by the user of the refrigerator 1 .
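Resolving such an uttered name (an alias, slang, or a partial name) to a catalog name can be sketched as follows; the alias table and catalog entries are illustrative assumptions, since in practice the recognition is performed by the learning network model.

```python
# Hypothetical alias table; real aliases would be registered by the user.
ALIASES = {
    "pie": "apple pie",
    "fizzy water": "sparkling water",
}

def resolve_food_name(uttered, catalog=("apple pie", "sparkling water")):
    """Map an uttered name (exact, alias, or partial) to a catalog name."""
    uttered = uttered.lower().strip()
    if uttered in catalog:
        return uttered
    if uttered in ALIASES:
        return ALIASES[uttered]
    for name in catalog:      # fall back to a partial-name match
        if uttered in name:
            return name
    return None
```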
- the food list 300 may be displayed in the card type UI at the bottom of the speech recognition UI 250 .
- the food list 300 may include a representative image 314 representing the food, food information 316 including the manufacturer, the name of the food, the amount, the quantity and the price, and an identification mark 312 , which distinguishes the corresponding food from other foods and is displayed as a number.
- the identification mark may be displayed as numbers or characters.
- a tab 318 may be displayed at the bottom of the food list 300 , and the tab 318 may be selected when the user wants to find more foods than those contained in the food list 300 .
- the tab 318 may display the text “see more food at store*” indicating the function of the tab 318 .
- the store application providing the food information may be executed and thus the food information may be displayed on the display 120 .
- the controller 110 may output the food list 300 and a speech requesting confirmation of the selection of a food from among the foods contained in the food list 300 .
- the controller 110 may display a microphone-shaped image indicating that speech is being output through the speaker 181 . Further, the speech output through the speaker 181 may be displayed as text on the first area 252 of the speech recognition UI.
- the controller 110 may recognize a first speech including the food name via the microphone 182 .
- the controller 110 may convert the first input speech into a digital speech signal.
- the controller 110 may recognize the digital speech signal through the learning network model, which is trained using the artificial intelligence algorithm, thereby acquiring the food name corresponding to the first speech.
- the food name contained in the first speech and the food name acquired by the recognition may be the same or different from each other.
- the food name contained in the first speech may be an idiomatic word, an alias, or a part of the food name, but the food name acquired by the recognition may be the full name, the sale name, or the trademark name.
- the learning network model recognizing the first speech may be stored in the storage 130 or in the first server SV 1 recognizing first speech data by receiving the first speech data.
- the controller 110 may transmit the first speech, which is converted into the digital speech signal, to the first server SV 1 .
- the first server SV 1 may recognize (or estimate, infer, predict, identify) the food name corresponding to the first speech by applying the first speech as an input value to the learning network model that is trained using the artificial intelligence algorithm. Based on the result of the recognition, the first server SV 1 may transmit the food name corresponding to the first speech to the controller 110 .
- the controller 110 may recognize the food name corresponding to the first speech by applying the first speech, which is converted into the digital speech signal, to the learning network model stored in the storage 130 .
- the controller 110 may display the food list 300 , which includes food information 316 on the acquired food name and the identification mark 312 for distinguishing the food from other food, on the display 120 .
- the controller 110 may recognize a second speech referring to the identification mark 312 of the food list.
- the controller 110 may convert the second input speech into a digital speech signal.
- the controller 110 may acquire an identification mark corresponding to the second speech by recognizing the digital speech signal through the learning network model that is trained using the artificial intelligence algorithm.
- the learning network model recognizing the second speech may be stored in the storage 130 or in the first server SV 1 analyzing speech data.
- the controller 110 may transmit the second speech, which is converted into the digital speech signal, to the first server SV 1 .
- the first server SV 1 may recognize (or estimate, infer, predict, identify) the identification mark corresponding to the second speech by applying the second speech as an input value to the learning network model that is trained using the artificial intelligence algorithm. Based on the result of the recognition, the first server SV 1 may transmit the identification mark corresponding to the second speech to the controller 110 .
- the controller 110 may recognize the identification mark corresponding to the second speech by applying the second speech, which is converted into the digital speech signal, to the learning network model stored in the storage 130 .
- the controller 110 may display at least one piece of food purchase information represented by the identification mark 312 on the display 120 .
- FIG. 11B is a view illustrating a method for a user to purchase food through a refrigerator according to another embodiment of the present disclosure.
- a speech for ordering a certain food may be input via a microphone 182 ( 950 ).
- a controller 110 may recognize a first speech including a food name. As described above, the controller 110 may recognize the first speech by applying the first speech, which is input through the microphone 182 , to the learning network model that is trained using the artificial intelligence algorithm.
- the controller 110 may identify whether a food 951 - 1 , which is related to the food name corresponding to the first speech, is placed in the storage compartment 20 ( 951 ).
- the food related to the food name may be a food having the food name as at least a part of its name, a food having a similar name, or a food that can substitute for the food.
- the refrigerator 1 may store a storage information list, which includes the names of the foods placed in the storage compartment 20 , in the storage 130 .
- the food names in the storage information list may be generated from user input when the user stores food in the storage compartment 20 of the refrigerator 1 .
- for example, information input by the user as speech or text may be stored as the food names.
- the food name contained in the identification information of the food (e.g., a bar code) may be stored as the food name when the user tags the identification information.
- the food names may be generated by recognizing an image of the storage compartment 20 captured by the camera installed in the refrigerator 1 .
- the refrigerator 1 may recognize the food name by applying the image, which is captured by the camera, to the learning network model, which is trained using the artificial intelligence algorithm, and store the recognized food name.
- the refrigerator 1 may search for whether the food name corresponding to the first speech is on the storage information list, and identify whether a food related to the food name is placed in the storage compartment 20 .
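The lookup against the storage information list can be sketched as a containment match in either direction (stored name inside the uttered name, or vice versa); the list contents and matching rule are assumptions for illustration.

```python
def find_related_food(food_name, storage_list):
    """Return stored foods whose names contain, or are contained in,
    the recognized food name."""
    name = food_name.lower()
    return [item for item in storage_list
            if name in item.lower() or item.lower() in name]
```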
- the controller 110 may allow the display 120 to display information indicating the presence of the food 951 - 1 ( 952 ).
- Information indicating the presence of the food may include at least one of a video or image 952 - 1 , which is acquired by capturing the food, a notification text 952 - 2 (e.g., “food similar to what you are looking for is in the refrigerator”) indicating the presence of the food, or a notification sound indicating the presence of the food.
- the user may want to order additional food.
- a UI (not shown) for ordering additional food may be displayed together.
- the controller 110 may allow the display 120 to display the food list 300 , as illustrated in FIG. 11A .
- FIG. 11C is a view illustrating a method for a user to purchase food through a refrigerator according to yet another embodiment of the present disclosure.
- a speech for ordering a certain food may be input via a microphone 182 ( 960 ).
- a controller 110 may recognize a first speech including a food name. As described above, the controller 110 may recognize the first speech by applying the first speech, which is input through the microphone 182 , to the learning network model that is trained using the artificial intelligence algorithm.
- the controller 110 may identify whether a food, which is related to the food name corresponding to the first speech, is placed in a storage compartment 20 ( 961 ). For example, the controller 110 may identify whether a food related to the food name is placed in the storage compartment 20 , based on an image of the storage compartment 20 captured by the camera installed in the refrigerator 1 . Alternatively, a storage information list of the foods placed in the storage compartment 20 may be stored in the storage 130 of the refrigerator 1 . In this case, the refrigerator 1 may search for whether the food name is on the storage information list, and identify whether the food related to the food name is placed in the storage compartment 20 .
- the controller 110 may display a food list 962 - 1 including food information having a name corresponding to the food name and a mark for identifying the food information, on the display 120 in order to purchase the food ( 962 ).
- the food list 962 - 1 may be displayed in the card type UI at the bottom of the speech recognition UI.
- FIG. 11D is a view illustrating a method for a user to purchase food through a refrigerator according to yet another embodiment of the present disclosure.
- a speech for ordering a certain food may be input via a microphone 182 ( 970 ).
- the controller 110 may recognize a first speech including a food name. As described above, the controller 110 may recognize the first speech by applying the first speech, which is input through the microphone 182 , to the learning network model that is trained using the artificial intelligence algorithm. In addition, the controller 110 may recognize a user uttering the first speech by applying the first speech, which is input through the microphone 182 , to the learning network model that is trained using the artificial intelligence algorithm, and acquire user information on the user uttering the first speech.
- the controller 110 may transmit the first speech, which is converted into the digital speech signal, to the first server SV 1 .
- the first server SV 1 may recognize (or estimate, infer, predict, identify) the user uttering the first speech by applying the first speech as an input value to the learning network model that is trained using the artificial intelligence algorithm.
- the first server SV 1 may transmit the user information on the user uttering the first speech to the controller 110 .
- the controller 110 may recognize the user uttering the first speech by applying the first speech, which is converted into the digital speech signal, to the learning network model stored in the storage 130 .
- the controller 110 may acquire food information preferred by the user uttering the first speech related to the food name ( 971 ). For example, when a large number of users use the refrigerator 1 , each user may have their own preferred food depending on the manufacturer, the type, the capacity, and the place where the food is sold, even when the foods have the same name. Accordingly, the storage 130 of the refrigerator 1 may store a user-specific food information list 971 - 1 in which information on the preferred food of each user is registered with respect to foods of the same name.
- the user-specific food information may be determined based on the purchase history of each user, an input history directly registered by the user, and a history of food information corresponding to the identification marks previously selected from among the identification marks corresponding to the food name.
- the user-specific food information list may store the food information according to the preferred priority for each user. For example, food information may be prioritized in order of the user's purchase history, with a higher priority given to foods having a larger purchase history.
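Prioritizing by purchase history, as described above, can be sketched as sorting candidate food information by each user's purchase counts; the data shapes and example entries are assumptions for illustration.

```python
def rank_by_purchase_history(candidates, purchase_counts):
    """Order candidate food information by the user's purchase counts,
    with a larger purchase history ranked first."""
    return sorted(candidates,
                  key=lambda food: purchase_counts.get(food, 0),
                  reverse=True)
```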
- the controller 110 may acquire the food information preferred by a specific user based on the food information list stored in the storage 130 ( 971 ).
- the controller 110 may acquire the food information preferred by a specific user by applying the food name and the user information to the learning network model that is trained using the artificial intelligence algorithm.
- the learning network model may be a learning network model that is trained by using the above mentioned purchase history, input history, and food information history corresponding to the selected identification mark.
- the controller 110 may display a food list 972 - 1 including the information of the food preferred by the user and an identification mark for distinguishing the food from other food preferred by the user, on the display 120 .
- the information of the food preferred by the user may be arranged in order of the user preference.
- the user can utter a speech for selecting a certain food in response to a speech requesting confirmation of the selection of a food from among the foods contained in the food list ( 902 ).
- the user can utter an identification mark for distinguishing foods, as a speech.
- the user may select the identification mark by uttering “number 1” or “first thing” ( 903 and 904 ).
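Mapping such selection utterances to an identification mark can be sketched with a small phrase table; the ordinal table and parsing rule are assumptions, since a deployed system would rely on the first server's NLU.

```python
ORDINALS = {"first": 1, "second": 2, "third": 3}

def utterance_to_mark(utterance):
    """Map a selection utterance to an identification mark number."""
    for word in utterance.lower().split():
        if word.isdigit():
            return int(word)        # e.g. "number 1"
        if word in ORDINALS:
            return ORDINALS[word]   # e.g. "first thing"
    return None
```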
- the controller 110 may analyze the user speech through the speech recognition function. As illustrated in FIG. 10 , the controller 110 may transmit the user speech data to the first server SV 1 and receive the analyzed information of the speech data from the first server SV 1 , thereby identifying the user intention. Alternatively, the controller 110 of the refrigerator 1 itself may analyze the user speech using the speech recognition function and identify the user intention.
- the controller 110 may output the analyzed information of the user speech data to the store application through the method illustrated in FIG. 11 and when receiving the food information from the second server SV 2 , the controller 110 may display the UI on the display 120 by executing the store application, as illustrated in FIG. 13 .
- a food confirmation UI 350 indicating specific information on a food selected by the user may be displayed.
- the food confirmation UI may include a representative image region 354 representing a food, a food information area 356 including a manufacturer, a food name, a capacity, a quantity and a price, a quantity control area 358 for controlling the quantity of food, and a food purchase area 360 including a cart tab for putting a food in the cart, a buy now tab for immediately purchasing a food, a gift tab for presenting a food, and a tab for bookmarking the selected food.
- the number and arrangement of areas contained in the food purchase UI, as illustrated in FIG. 13 , are merely an example, and the UI may thus include other numbers, arrangements, or contents.
- the user may say “put it to the cart” to put the food, which is displayed on the food confirmation UI, in the cart ( 907 ).
- the user may say “buy it” to immediately purchase the food, which is displayed on the food confirmation UI ( 912 ).
- the controller 110 may display the speech recognition UI on the display 120 and display a text “add the selected food to the cart” on the speech recognition UI, as illustrated in FIG. 14 ( 909 ).
- the controller 110 may output the text, which is displayed on the speech recognition UI, via the speaker 181 by a speech using the TTS function.
- the speech recognition UI may be displayed in the card type UI on the store application screen.
- the store application may display a cart screen on the display 120 ( 911 ).
- the controller 110 may display the speech recognition UI on the display 120 and display a text “an error occurs in adding the food to the cart. Please check the contents and try again” on the speech recognition UI, as illustrated in FIG. 14 ( 910 ).
- the controller 110 may output the text, which is displayed on the speech recognition UI, as a speech using the TTS function. In this case, the user may say again “put it to the cart” to add the food to the cart again.
- the controller 110 may display the speech recognition UI on the display 120 and display a text “go to the purchase page of the selected food” on the speech recognition UI ( 914 ).
- the controller 110 may output the text, which is displayed on the speech recognition UI, as a speech using the TTS function.
- the speech recognition UI may be displayed in the card type UI on the store application screen.
- the store application may display the food purchase screen on the display 120 ( 915 ).
- the controller 110 may display the speech recognition UI on the display 120 and display a text “an error occurs in going to the purchase page. Please check the contents and try again” on the speech recognition UI, as illustrated in FIG. 14 ( 916 ).
- the controller 110 may output the text, which is displayed on the speech recognition UI, as a speech using the TTS function. In this case, the user may say again “buy it” to purchase the food again.
- the user may say “cancel” when the user does not want to select the food contained in the food list or the user wants to cancel the food purchase process ( 917 ).
- the controller 110 may display the speech recognition UI on the display 120 and display a text “food search canceled” on the speech recognition UI ( 918 ).
- the controller 110 may output the text, which is displayed on the speech recognition UI, as a speech using the TTS function.
- the speech recognition UI may be displayed in the card type UI on the store application screen.
- the user may say the food name instead of the identification mark ( 919 ).
- the controller 110 may display the speech recognition UI on the display 120 and display a text “please tell the number” on the speech recognition UI ( 920 ).
- the controller 110 may output the text, which is displayed on the speech recognition UI, as a speech using the TTS function.
- the controller 110 may not immediately execute the application, but may first provide information in the card type UI, and when certain information among the information provided in the card type UI is selected, the controller 110 may execute the related application.
- FIGS. 15 and 16 are views illustrating a method in which the refrigerator according to one embodiment of the present disclosure performs a command in accordance with a speech command.
- the controller 110 of the refrigerator 1 may perform a command corresponding to the speech command, as illustrated in FIG. 15 .
- the controller 110 of the refrigerator 1 may analyze the user speech through the speech recognition function, thereby identifying the user intention.
- the controller 110 of the refrigerator 1 may analyze the user speech through the speech recognition function.
- the controller 110 of the refrigerator 1 may transmit the user speech data to the first server SV 1 and receive the analyzed information of the speech data from the first server SV 1 .
- the controller 110 of the refrigerator 1 may identify whether the music application is executed, that is, whether music is being played ( 1301 ). When music is being played, the controller 110 may immediately stop playing the music ( 1302 ).
- the controller 110 may display the speech recognition UI on the display 120 and display a text “music is not playing” on the speech recognition UI ( 1303 ). In addition, via the speaker 181 , the controller 110 may output the text, which is displayed on the speech recognition UI, as a speech using the TTS function.
- a speech command in which the target is not specified may be input via the microphone 182 , which is different from the example of FIG. 15 . That is, as illustrated in FIG. 16 , a speech “stop”, in which the target of the command is omitted, may be input via the microphone 182 ( 1300 ).
- the controller 110 of the refrigerator 1 may analyze the user speech through the speech recognition function, thereby identifying the user intention.
- the controller 110 of the refrigerator 1 may analyze the user speech through the speech recognition function.
- the controller 110 of the refrigerator 1 may transmit the user speech data to the first server SV 1 and receive the analyzed information of the speech data from the first server SV 1 .
- the controller 110 may identify whether the controller is capable of performing the speech command on the currently executed application displayed on the display 120 ( 1310 ).
- When the controller 110 is capable of performing the speech command on the currently executed application displayed on the display 120 , the controller 110 may perform the speech command on that application ( 1320 ).
- the controller 110 may immediately stop the function in which the recipe application reads the recipe.
- the controller 110 may identify whether to perform the command on the application displayed on the display 120 , which is currently executed, as the first priority.
- When the controller 110 is not capable of performing the speech command on the currently executed application displayed on the display 120 , the controller 110 may display the speech recognition UI on the display 120 , thereby notifying that it is impossible to perform the speech command ( 1330 ).
- the controller 110 may display the speech recognition UI on the display 120 and display a text “not reading the recipe” on the speech recognition UI.
- the controller 110 may output the text, which is displayed on the speech recognition UI, as a speech using the TTS function.
- the controller 110 may identify whether a predetermined priority about the application related to the input speech command is present ( 1340 ). When the predetermined priority is present, the controller 110 may perform the speech command on the application according to the priority ( 1350 ).
- the controller 110 may perform the speech command on the application ranked in the highest priority, and when it is impossible to perform the speech command on the application having the highest priority, the controller 110 may perform the speech command on the application ranked in the next highest priority.
- the controller 110 stops playing the music being played by the music application.
- the controller 110 may identify whether to perform the speech command on the application ranked in the next highest priority. Accordingly, the controller 110 may perform the speech command on the applications in order of priority.
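- The fallback order described above (the application on the display first, then the predetermined priority list, then a confirmation request) can be sketched as follows; the `App` class and its `can_handle`/`perform` hooks are hypothetical names for illustration, not part of the disclosed refrigerator software.

```python
class App:
    """Hypothetical stand-in for an application running on the refrigerator."""

    def __init__(self, name, commands):
        self.name = name
        self.commands = set(commands)

    def can_handle(self, command):
        return command in self.commands

    def perform(self, command):
        return f"{self.name}:{command}"


def dispatch(command, foreground_app, priority_list):
    """Route a target-less command such as "stop" to an application.

    Tries the application currently shown on the display first (steps
    1310/1320), then walks the predetermined priority list in order
    (steps 1340/1350). Returns None when no application can handle the
    command, so the caller can request confirmation (step 1360).
    """
    if foreground_app.can_handle(command):
        return foreground_app.perform(command)
    for app in priority_list:
        if app.can_handle(command):
            return app.perform(command)
    return None
```

For example, with a recipe application in the foreground that cannot handle “stop”, the command falls through to the music application when it ranks highest in the priority list.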
- the controller 110 may request a confirmation about whether to perform the speech command on another application ( 1360 ).
- the controller 110 may identify whether to perform the speech command on another application.
- the controller 110 may select the radio application as the target of the speech command, as illustrated in FIG. 16 , and before executing the speech command on the radio application, the controller 110 may display the speech recognition UI and request confirmation about whether to execute the speech command.
- the controller 110 may display the speech recognition UI on the display 120 and display a text “do you want to stop the radio” on the speech recognition UI.
- the controller 110 may output the text, which is displayed on the speech recognition UI, as a speech using the TTS function.
- When an answer to the confirmation request is input via the microphone 182 , the controller 110 performs a command according to the input answer ( 1370 ).
- the controller 110 stops the execution of the radio application.
- the refrigerator 1 may perform the command meeting the user intention using the method illustrated in FIG. 16 .
- FIGS. 17 to 20 are views illustrating a method in which the refrigerator according to one embodiment of the present disclosure identifies a user and provides information corresponding to the identified user.
- the user speech may refer to a specific user, and the user may ask to be provided with information related to the specific user.
- the controller 110 of the refrigerator 1 may analyze the user speech through the speech recognition function, thereby identifying the user intention.
- the controller 110 of the refrigerator 1 may analyze the user speech through the speech recognition function. Alternatively, the controller 110 of the refrigerator 1 may transmit the speech data to the first server SV 1 and receive the analyzed information of the speech data from the first server SV 1 .
- the calendar application has been described as an application capable of providing user-specific information, but the application is not limited to the calendar application. Therefore, the speech output setting application (which is referred to as “morning brief” in FIG. 5 ) capable of outputting contents, which are specified for the user, as a speech, may provide user-specific information according to methods illustrated in FIGS. 17 to 20 .
- the controller 110 of the refrigerator 1 may identify the user intention contained in the speech by recognizing the user name contained in the speech, and perform the command according to the speech. That is, when it is identified that the user intention is to show the calendar of the mother among a plurality of users, particularly this month's calendar among the mother's calendars, the controller 110 displays the mother's calendar 290 for this month on the display 120 by executing the calendar application, as illustrated in FIG. 18 ( 2030 ).
- a calendar provided from the calendar application may include a calendar part 291 on which date and day of the week are displayed, and a schedule part 292 on which today's schedule is displayed.
- the configuration of the calendar illustrated in FIG. 18 is merely an example and thus the configuration and arrangement of the calendar may have a variety of shapes.
- When the controller 110 of the refrigerator 1 fails to recognize the name of a specific user contained in the speech, the controller 110 performs a process of selecting a specific user among the stored users ( 2020 ).
- the controller 110 of the refrigerator 1 may display the speech recognition UI on the display 120 and then display a text “please select a profile” on the speech recognition UI ( 2021 ).
- the controller 110 may output the text, which is displayed on the speech recognition UI, as a speech using the TTS function.
- the controller 110 may display a user profile list, which is pre-registered in the refrigerator 1 , together with the speech recognition UI.
- the profile list includes registered user names and an identification mark for identifying each user name.
- the identification mark may be displayed as figures, but is not limited thereto. Because any mark capable of distinguishing a plurality of users from each other is sufficient, the identification mark may be displayed as figures or characters.
- the user can utter a speech for selecting a specific user in response to a speech for requesting confirmation for selecting a user among users contained in the profile list ( 2022 ).
- the user may utter an identification mark in a speech.
- the user may select the identification mark by uttering “two” or “second” ( 2023 and 2024 ).
- the controller 110 may analyze the user speech through the speech recognition function. As illustrated in FIG. 10 , the controller 110 may transmit the user speech data to the first server SV 1 and receive the analyzed information of the speech data from the first server SV 1 , thereby identifying the user intention. Alternatively, the controller 110 of the refrigerator 1 itself may analyze the user speech using the speech recognition function and identify the user intention.
- the controller 110 may select a user name corresponding to the identification mark uttered by the user, and display the selected user's calendar on the display 120 . That is, the controller 110 selects the user corresponding to “two” or “second” uttered by the user as the mother, and as illustrated in FIG. 18 , displays the mother's calendar for this month on the display 120 ( 2030 ).
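- The selection step above can be sketched as a lookup from the uttered identification mark to a 1-based position in the profile list. The ordinal vocabulary below is an assumption for illustration; the disclosure only requires a mark capable of distinguishing users.

```python
# Hypothetical mapping from uttered words to 1-based identification marks.
ORDINALS = {"one": 1, "first": 1, "two": 2, "second": 2,
            "three": 3, "third": 3, "four": 4, "fourth": 4}


def select_profile(utterance, profiles):
    """Return the profile whose identification mark matches, else None.

    Returning None corresponds to re-prompting the user to say the
    profile number (step 2026).
    """
    index = ORDINALS.get(utterance.strip().lower())
    if index is None or not 1 <= index <= len(profiles):
        return None
    return profiles[index - 1]
```

With the profile list ["father", "mother", "son"], uttering “two” or “second” selects “mother”; uttering a user name instead of a mark yields None and triggers the re-prompt.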
- the user may utter the user name instead of the identification mark ( 2025 ).
- the controller 110 may display the speech recognition UI on the display 120 in response to the user utterance, and display a text “say the profile number” on the speech recognition UI ( 2026 ).
- the controller 110 may output the text, which is displayed on the speech recognition UI, as a speech using the TTS function.
- the controller 110 may display the speech recognition UI on the display 120 and display a text “it is impossible to get accurate information” on the speech recognition UI, as illustrated in FIG. 19 ( 2028 ).
- the controller 110 may output the text, which is displayed on the speech recognition UI, as a speech using the TTS function.
- the controller 110 may display the speech recognition UI on the display 120 in response to the user utterance, and display a text “it is canceled” on the speech recognition UI.
- the controller 110 may output the text, which is displayed on the speech recognition UI, as a speech using the TTS function.
- the user may ask to provide information, which is stored differently for each user.
- the controller 110 of the refrigerator 1 may analyze the user speech through the speech recognition function, thereby identifying the user intention.
- the controller 110 of the refrigerator 1 may analyze the user speech through the speech recognition function.
- the controller 110 of the refrigerator 1 may transmit the speech data to the first server SV 1 and receive the analyzed information of the speech data from the first server SV 1 .
- the controller 110 of the refrigerator 1 compares the speech data with speech data of the users, which is stored in advance, and identifies whether speech data matching the input speech data is present among the pre-stored speech data ( 2041 ).
- the controller 110 of the refrigerator 1 compares parameters of the speech data, such as the amplitude, waveform, frequency or period, which is input via the microphone 182 , with parameters of the pre-stored speech data, and selects speech data, which is identified as identical to the speech data input via the microphone 182 , from among the pre-stored speech data.
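- As a hedged sketch of the comparison above, each stored profile can keep a small parameter vector (for example, mean amplitude and fundamental frequency), and the input speech is matched to the nearest profile within a tolerance. The feature choice, distance metric, and threshold here are all illustrative assumptions; a production system would use a proper speaker-identification model.

```python
import math


def match_speaker(features, profiles, threshold):
    """Return the name of the closest stored profile, or None.

    features  -- parameter vector extracted from the microphone input
    profiles  -- mapping of user name to stored parameter vector
    threshold -- maximum Euclidean distance accepted as a match
    """
    best_name, best_dist = None, float("inf")
    for name, stored in profiles.items():
        dist = math.dist(features, stored)  # Euclidean distance
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist <= threshold else None
```

Returning None corresponds to falling back to the profile-selection prompt described below.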
- the controller 110 may display the calendar of the user indicated by the selected speech data on the display 120 ( 2042 ). For example, when it is identified that the matched speech data among the pre-stored speech data belongs to the mother, the controller 110 may display the mother's calendar for this month on the display 120 .
- the controller 110 of the refrigerator 1 may display the speech recognition UI on the display 120 and then display a text “please select a profile” on the speech recognition UI as illustrated in FIG. 19 ( 2021 ).
- the controller 110 may output the text, which is displayed on the speech recognition UI, as a speech using the TTS function.
- the controller 110 may display a user profile list, which is pre-registered in the refrigerator 1 , together with the speech recognition UI.
- the user can utter a speech for selecting a specific user in response to a speech for requesting confirmation for selecting a user among users contained in the profile list ( 2022 ).
- the user may utter an identification mark in a speech.
- the user may select the identification mark by uttering “two” or “second” ( 2023 and 2024 ).
- the controller 110 may select a user name corresponding to the identification mark uttered by the user, and display the selected user's calendar on the display 120 . That is, the controller 110 selects the user corresponding to “two” or “second” uttered by the user as the mother, and as illustrated in FIG. 18 , displays the mother's calendar for this month on the display 120 ( 2030 ).
- the controller 110 may display the speech recognition UI on the display 120 in response to the user's utterance, and display a text “say the profile number” on the speech recognition UI ( 2026 ).
- the controller 110 may output the text, which is displayed on the speech recognition UI, as a speech using the TTS function.
- the controller 110 may display the speech recognition UI on the display 120 and display a text “it is impossible to get accurate information” on the speech recognition UI, as illustrated in FIG. 19 ( 2028 ).
- the controller 110 may output the text, which is displayed on the speech recognition UI, as a speech using the TTS function.
- the refrigerator 1 may identify the user and provide information that is appropriate for the user, according to the method illustrated in FIG. 20 .
- FIG. 21A is a block diagram of a controller for data training and recognition according to one embodiment of the present disclosure.
- a controller 2100 may include a data learner 2110 and a data recognizer 2120 .
- the controller 2100 may correspond to the controller 110 of the refrigerator 1 .
- the controller 2100 may correspond to a controller (not shown) of the first server SV 1 .
- the data learner 2110 may train a learning network model to have a criterion to recognize speech data. For example, the data learner 2110 may train the learning network model to have criteria to recognize the food name corresponding to the first speech or the identification mark corresponding to the second speech. Alternatively, the data learner 2110 may train the learning network model to have a criterion to recognize images. For example, the data learner 2110 may train the learning network model to have a criterion to recognize foods stored in the storage compartment based on images obtained by capturing the storage compartment of the refrigerator 1 .
- the data learner 2110 may train the learning network model (i.e., data recognition model) using training data according to a supervised learning method or an unsupervised learning method based on artificial intelligence algorithms.
- the data learner 2110 may train the learning network model by using training data such as speech data and a text (e.g., food names and identification marks) corresponding to the speech data.
- the data learner 2110 may train the learning network model by using training data such as speech data and user information on a user uttering the speech data.
- the data learner 2110 may train the learning network model by using training data such as image data and a text (e.g., food names) corresponding to the image data.
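- The (training data, label) contract described above can be illustrated with a toy stand-in for the learning network model. Here a nearest-centroid classifier over feature vectors replaces the neural network purely for demonstration, and the extraction of features from speech or image data is assumed to happen elsewhere.

```python
class CentroidModel:
    """Toy stand-in for the learning network model (not a neural network)."""

    def __init__(self):
        self.centroids = {}

    def train(self, samples):
        """samples: list of (feature_vector, label) pairs, e.g. (vec, "milk")."""
        sums = {}
        for feats, label in samples:
            acc, n = sums.setdefault(label, ([0.0] * len(feats), 0))
            sums[label] = ([a + f for a, f in zip(acc, feats)], n + 1)
        # one centroid (mean feature vector) per label
        self.centroids = {label: [a / n for a in acc]
                          for label, (acc, n) in sums.items()}

    def recognize(self, feats):
        """Return the label whose centroid is nearest to the feature vector."""
        return min(self.centroids,
                   key=lambda label: sum((a - f) ** 2 for a, f in
                                         zip(self.centroids[label], feats)))
```

The `train`/`recognize` split mirrors the division of labor between the data learner 2110 and the data recognizer 2120.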
- the data recognizer 2120 may recognize speech data by applying speech data as feature data to the learning network model that is trained. For example, the data recognizer 2120 may recognize the food name or the identification mark corresponding to the speech data, by applying the speech data related to the first speech or the second speech, as feature data to the learning network model.
- the data recognizer 2120 may recognize the user uttering the speech data by applying the speech data as feature data to the learning network model.
- the data recognizer 2120 may recognize image data by applying image data, which is acquired by capturing the storage compartment 20 , as feature data to the learning network model that is trained. For example, the data recognizer 2120 may recognize the food name corresponding to the image data, by applying the image data as feature data to the learning network model.
- At least one of the data learner 2110 and the data recognizer 2120 may be manufactured as at least one hardware chip and then mounted to an electronic device.
- at least one of the data learner 2110 and the data recognizer 2120 may be manufactured in the form of a dedicated hardware chip for artificial intelligence (AI) or manufactured as a part of a conventional general purpose processor (e.g., CPU or application processor) or a dedicated graphics processor (e.g., GPU) and then mounted to the above mentioned electronic device.
- the dedicated hardware chip for artificial intelligence is a dedicated processor specialized for probability calculation; it has higher parallel processing performance than conventional general purpose processors and thus can quickly process computations in artificial intelligence fields such as machine learning.
- the data learner 2110 and the data recognizer 2120 may be mounted on one electronic device or on separate electronic devices, respectively.
- one of the data learner 2110 and the data recognizer 2120 may be contained in the refrigerator 1 and the other may be contained in the server SV 1 .
- Through wired or wireless communication, at least a part of the learning network model constructed by the data learner 2110 may be provided to the data recognizer 2120 , and data, which is input to the data recognizer 2120 , may be provided as additional training data to the data learner 2110 .
- At least one of the data learner 2110 and the data recognizer 2120 may be implemented as a software module.
- the software module may be stored in non-transitory computer readable media.
- at least one software module may be provided by Operating System (OS) or a certain application.
- some of the at least one software module may be provided by the OS and the remainder may be provided by a certain application.
- FIG. 21B is a detailed block diagram of the data learner 2110 and the data recognizer 2120 according to one embodiment of the present disclosure.
- FIG. 21B is a block diagram of the data learner 2110 according to one embodiment of the present disclosure.
- the data learner 2110 may include a data acquirer 2110 - 1 , a pre-processor 2110 - 2 , a training data selector 2110 - 3 , a model learner 2110 - 4 and a model evaluator 2110 - 5 .
- the data learner 2110 may necessarily include the data acquirer 2110 - 1 and the model learner 2110 - 4 , and may selectively include at least one of the pre-processor 2110 - 2 , the training data selector 2110 - 3 , and the model evaluator 2110 - 5 , or may include none of them.
- the data acquirer 2110 - 1 may acquire data needed for learning a criterion for recognizing speech data or image data.
- the data acquirer 2110 - 1 may acquire speech data or image data.
- the data acquirer 2110 - 1 may acquire speech data or image data from the refrigerator 1 .
- the data acquirer 2110 - 1 may acquire speech data or image data from a third device (e.g., a mobile terminal or a server) connected to the refrigerator 1 through the communication.
- the data acquirer 2110 - 1 may acquire speech data or image data from a device configured to store or manage training data, or a data base.
- the pre-processor 2110 - 2 may pre-process the speech data or image data.
- the pre-processor 2110 - 2 may process the acquired speech data or image data into a predetermined format so that the model learner 2110 - 4 , which will be described later, may use the acquired data for learning the situation determination.
- the pre-processor 2110 - 2 may remove noises from data, such as the speech data or image data, acquired by the data acquirer 2110 - 1 so as to select effective data, or process the data into a certain format.
- the pre-processor 2110 - 2 may process the acquired data into a form suitable for learning.
- the training data selector 2110 - 3 may select speech data or image data, which is needed for learning, from the pre-processed data according to the pre-determined criteria, or may randomly select the speech data or image data.
- the selected training data may be provided to the model learner 2110 - 4 .
- the pre-determined criteria may include at least one of an attribute of data, a generation time of data, a place where data is generated, an apparatus for generating data, a reliability of data, and a size of data.
- the training data selector 2110 - 3 may select training data according to a criterion pre-determined by training of the model learner 2110 - 4 , described later.
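- A selection step over the pre-determined criteria listed above (attribute, generation time, place, generating apparatus, reliability, size) might be sketched as a simple filter. The record field names and thresholds below are assumptions for illustration only.

```python
def select_training_data(records, min_reliability=0.8, max_size=1_000_000):
    """Keep only pre-processed records meeting the reliability and size criteria.

    Each record is assumed to carry per-item metadata such as a
    reliability score and a data size in bytes.
    """
    return [r for r in records
            if r["reliability"] >= min_reliability and r["size"] <= max_size]
```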
- the model learner 2110 - 4 may train the learning network model so as to have a criterion for recognizing speech data or image data based on the training data.
- the model learner 2110 - 4 may train the learning network model so as to have a criterion for recognizing the food name or the identification mark corresponding to the speech data.
- the model learner 2110 - 4 may train the learning network model so as to have a criterion for recognizing the food stored in the storage compartment based on the image data of the storage compartment of the refrigerator 1 .
- the learning network model may be a model constructed in advance.
- the learning network model may be a model that is pre-constructed by receiving basic training data (e.g., sample image data or speech data).
- the learning network model may be set to recognize (or determine, estimate, infer, predict) the food name or the identification mark corresponding to the speech.
- the learning network model may be set to recognize (or determine, estimate, infer, predict) the user uttering the speech.
- the learning network model may be set to recognize (or determine, estimate, infer, predict) the food stored in the storage compartment based on the image data.
- the learning network model may be a model based on a neural network.
- the learning network model may be designed to mimic the human brain structure on a computer.
- the learning network model may include a plurality of network nodes having weights to mimic the neuron of the neural network.
- the plurality of network nodes may form respective connections to mimic the synaptic activity of neurons that send and receive signals through synapses.
- the learning network model may include a neural network model or a deep learning model developed from the neural network model.
- the plurality of network nodes may be placed at different depths (or layers) in the deep learning model and may send and receive data according to convolution connections.
- a deep neural network (DNN), a recurrent neural network (RNN), or a bidirectional recurrent deep neural network (BRDNN) may be used as the learning network model, but the learning network model is not limited thereto.
- the model learner 2110 - 4 may select a learning network model, which is closely related to the input training data and the basic training data, as the learning network model to be trained. For example, the model learner 2110 - 4 may train the learning network model by using a learning algorithm including an error back-propagation method or a gradient descent method. Alternatively, the model learner 2110 - 4 may train the learning network model through the supervised learning method using an input value as the training data. Alternatively, the model learner 2110 - 4 may allow the learning network model to learn through the unsupervised learning method, which learns without any supervision and finds a criterion on its own. Alternatively, the model learner 2110 - 4 may train the learning network model through the reinforcement learning method using feedback about whether a result of learning is correct or not.
- the model learner 2110 - 4 may store the trained learning network model.
- the model learner 2110 - 4 may store the trained learning network model in the storage 130 of the refrigerator 1 .
- the model learner 2110 - 4 may store the trained learning network model in the memory of the first server SV 1 or the memory of the second server SV 2 connected to the refrigerator 1 via a wired or wireless network.
- the storage 130 in which the trained learning network model is stored may also store instructions or data associated with, for example, at least one other element of the refrigerator 1 .
- the memory may also store software and/or programs.
- the program may include a kernel, a middleware, an application programming interface (API), and/or an application program (or “application”).
- the model evaluator 2110 - 5 may input evaluation data to the learning network model and, when a recognition result output from the evaluation data does not satisfy a predetermined criterion, the model evaluator 2110 - 5 may make the model learner 2110 - 4 learn again.
- the evaluation data may be pre-set data for evaluating the learning network model.
- When the number or ratio of pieces of evaluation data for which the recognition result is inaccurate exceeds a predetermined threshold, the model evaluator 2110 - 5 may evaluate that the predetermined criterion is not satisfied. For example, when the predetermined criterion is defined as a ratio of 2% and the trained learning network model outputs incorrect recognition results for more than 20 out of a total of 1000 pieces of evaluation data, the model evaluator 2110 - 5 may evaluate that the trained learning network model is inappropriate.
- the model evaluator 2110 - 5 may evaluate whether each of the trained learning network models satisfies the predetermined criteria and may select a learning network model satisfying the predetermined criteria as a final learning network model. In this case, when a plurality of learning network models satisfying the predetermined criteria are present, the model evaluator 2110 - 5 may select one, or a certain number, of the learning network models as final learning network models, pre-selected in descending order of evaluation score.
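- The evaluation rule above (for example, judging the model inadequate when more than 2% of 1000 pieces of evaluation data are recognized incorrectly) can be sketched as follows; the callable-model interface is an assumption for illustration.

```python
def satisfies_criterion(model, evaluation_data, max_error_ratio=0.02):
    """Return True when the trained model meets the predetermined criterion.

    model           -- callable mapping feature data to a recognition result
    evaluation_data -- list of (feature_data, expected_result) pairs
    """
    errors = sum(1 for feats, expected in evaluation_data
                 if model(feats) != expected)
    return errors / len(evaluation_data) <= max_error_ratio
```

A False result corresponds to sending the model back to the model learner 2110 - 4 for further training.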
- At least one of the data acquirer 2110 - 1 , the pre-processor 2110 - 2 , the training data selector 2110 - 3 , the model learner 2110 - 4 , and the model evaluator 2110 - 5 in the data learner 2110 may be manufactured as at least one hardware chip and then mounted to an electronic device.
- At least one of the data acquirer 2110 - 1 , the pre-processor 2110 - 2 , the training data selector 2110 - 3 , the model learner 2110 - 4 , and the model evaluator 2110 - 5 may be manufactured in the form of a dedicated hardware chip for artificial intelligence (AI) or manufactured as a part of a conventional general purpose processor (e.g., CPU or application processor) or a dedicated graphics processor (e.g., GPU) and then mounted to the above mentioned electronic device.
- the data acquirer 2110 - 1 , the pre-processor 2110 - 2 , the training data selector 2110 - 3 , the model learner 2110 - 4 , and the model evaluator 2110 - 5 may be mounted on one electronic device or on separate electronic devices, respectively.
- some of the data acquirer 2110 - 1 , the pre-processor 2110 - 2 , the training data selector 2110 - 3 , the model learner 2110 - 4 , and the model evaluator 2110 - 5 may be contained in the refrigerator 1 and the remainder may be contained in the server SV 1 or the server SV 2 .
- At least one of the data acquirer 2110 - 1 , the pre-processor 2110 - 2 , the training data selector 2110 - 3 , the model learner 2110 - 4 , and the model evaluator 2110 - 5 may be implemented as a software module.
- the software module may be stored in non-transitory computer readable media.
- at least one software module may be provided by Operating System (OS) or a certain application.
- some of the at least one software module may be provided by the OS and the remainder may be provided by a certain application.
- FIG. 21B is a block diagram of the data recognizer 2120 according to one embodiment of the present disclosure.
- the data recognizer 2120 may include a data acquirer 2120 - 1 , a pre-processor 2120 - 2 , a feature data selector 2120 - 3 , a recognition result provider 2120 - 4 , and a model updater 2120 - 5 .
- the data recognizer 2120 may necessarily include the data acquirer 2120 - 1 and the recognition result provider 2120 - 4 , and selectively include at least one of the pre-processor 2120 - 2 , the feature data selector 2120 - 3 , and the model updater 2120 - 5 .
- the data recognizer 2120 may recognize the food name and the food mark corresponding to the speech data, or the user uttering the speech data, by applying the speech data as the feature data to the trained learning network model. Alternatively, the data recognizer 2120 may recognize the food stored in the storage compartment 20 of the refrigerator 1 , by applying the image data as the feature data to the trained learning network model.
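The staged structure above can be sketched as a simple pipeline. This is a minimal, hypothetical illustration: the stage names mirror the blocks 2120-1 through 2120-4, but the toy "model" and the text-based data format are assumptions, not part of the disclosure.

```python
class DataRecognizer:
    def __init__(self, model):
        self.model = model  # trained learning network model (any callable)

    def acquire(self, raw):          # data acquirer 2120-1
        return raw

    def preprocess(self, data):      # pre-processor 2120-2: noise removal / formatting
        return " ".join(data.split()).lower()

    def select_features(self, data): # feature data selector 2120-3
        return data

    def recognize(self, features):   # recognition result provider 2120-4
        return self.model(features)

    def run(self, raw):
        return self.recognize(self.select_features(self.preprocess(self.acquire(raw))))

# toy "model": map normalized speech text to a recognized food name
recognizer = DataRecognizer({"two bottles of water": "water"}.get)
print(recognizer.run("  Two bottles of  WATER "))  # -> water
```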
- the data acquirer 2120 - 1 may acquire, from the speech data, the data needed for recognizing the food name, the food mark, and the user uttering the speech.
- the data acquirer 2120 - 1 may acquire image data for recognizing the food from the image data.
- the data acquirer 2120 - 1 may acquire data, which is directly input from the user or selected by the user, or may acquire a variety of sensing information detected by various sensors of the refrigerator 1 .
- the data acquirer 2120 - 1 may acquire data from an external device (e.g., mobile terminal or a server) communicated with the refrigerator 1 .
- the pre-processor 2120 - 2 may pre-process the speech data or image data.
- the pre-processor 2120 - 2 may process the acquired speech data or image data into a predetermined format so that the recognition result provider 2120 - 4 , which will be described later, may use the acquired data for the situation determination.
- the pre-processor 2120 - 2 may remove noises from data, such as the speech data or image data, acquired by the data acquirer 2120 - 1 so as to select effective data, or process the data into a certain format.
- the pre-processor 2120 - 2 may process the acquired data form into a form of data suitable for training.
- the feature data selector 2120 - 3 may select speech data or image data needed for recognition from the pre-processed data according to the pre-determined criteria, or may randomly select the speech data or the image data from the pre-processed data.
- the selected feature data may be provided to the recognition result provider 2120 - 4 .
- the pre-determined criteria may include at least one of an attribute of data, a generation time of data, a place where data is generated, an apparatus for generating data, a reliability of data, and a size of data.
- the recognition result provider 2120 - 4 may recognize the selected feature data by applying the selected feature data to the learning network model. For example, the recognition result provider 2120 - 4 may recognize the food name or the identification mark corresponding to the speech by applying the selected speech data to the learning network model. Alternatively, the recognition result provider 2120 - 4 may recognize the user uttering the speech by applying the selected speech data to the learning network model. Alternatively, the recognition result provider 2120 - 4 may recognize the food corresponding to the image data by applying the selected image data to the learning network model.
- the model updater 2120 - 5 may allow the learning network model to be refined, based on the evaluation of the recognition result provided from the recognition result provider 2120 - 4 .
- the model updater 2120 - 5 may allow the model learner 2110 - 4 to refine the learning network model, by providing the food name, the identification mark, or the user information, which is provided from the recognition result provider 2120 - 4 , to the model learner 2110 - 4 again.
- At least one of the data acquirer 2120 - 1 , the pre-processor 2120 - 2 , the feature data selector 2120 - 3 , the recognition result provider 2120 - 4 , and the model updater 2120 - 5 in the data recognizer 2120 may be manufactured as at least one hardware chip and then mounted to an electronic device.
- At least one of the data acquirer 2120 - 1 , the pre-processor 2120 - 2 , the feature data selector 2120 - 3 , the recognition result provider 2120 - 4 , and the model updater 2120 - 5 may be manufactured in the form of a dedicated hardware chip for artificial intelligence (AI) or manufactured as a part of a conventional general purpose processor (e.g., CPU or application processor) or a dedicated graphics processor (e.g., GPU) and then mounted to the above mentioned electronic device.
- the data acquirer 2120 - 1 , the pre-processor 2120 - 2 , the feature data selector 2120 - 3 , the recognition result provider 2120 - 4 , and the model updater 2120 - 5 may be mounted on one electronic device or on separate electronic devices, respectively.
- some of the data acquirer 2120 - 1 , the pre-processor 2120 - 2 , the feature data selector 2120 - 3 , the recognition result provider 2120 - 4 , and the model updater 2120 - 5 may be contained in the refrigerator 1 , and the rest may be contained in the server.
- At least one of the data acquirer 2120 - 1 , the pre-processor 2120 - 2 , the feature data selector 2120 - 3 , the recognition result provider 2120 - 4 , and the model updater 2120 - 5 may be implemented as a software module.
- the software module may be stored in non-transitory computer readable media.
- at least one software module may be provided by an operating system (OS) or by a certain application.
- some of the at least one software module may be provided by the OS, and the rest may be provided by a certain application.
- FIG. 22 is a flow chart of the refrigerator displaying information according to one embodiment of the present disclosure
- the refrigerator 1 may identify whether a first speech including a food name is input through the microphone 182 ( 2201 ).
- the refrigerator 1 may display a food list including food information corresponding to the food name and an identification mark for identifying food information, on the display 120 installed on the front surface of the refrigerator 1 based on the recognition of the first speech ( 2202 ).
- the refrigerator 1 may acquire the food name by recognizing the first speech using the learning network model, which is trained through the artificial intelligence algorithm, and display the food list including the food information corresponding to the food name, on the display 120 .
- the learning network model may be a learning network model that is trained using a plurality of speeches and a plurality of words corresponding to the plurality of speeches.
- the refrigerator 1 may acquire the food name and information on the user uttering the first speech by recognizing the first speech using the learning network model, which is trained through the artificial intelligence algorithm, and display the food list including the food information (e.g., information on a food preferred by the user) which corresponds to the food name and is related to the user information, on the display 120 .
- the learning network model may be a learning network model that is trained using a plurality of speeches and the user information respectively corresponding to the plurality of speeches.
- the refrigerator 1 may identify whether a food related to the recognized food name is placed in the storage compartment 20 of the refrigerator 1 . When it is identified that the food is not placed in the storage compartment 20 , the refrigerator 1 may display the food list including the food information corresponding to the food name, on the display 120 . Meanwhile, when it is identified that the food is placed in the storage compartment 20 , the refrigerator 1 may display information indicating that the food related to the recognized food name is placed in the storage compartment 20 , on the display 120 .
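The branch described above can be sketched in a few lines, assuming a simple set of stored food names: if the recognized food is already in the storage compartment, a notice is chosen; otherwise the food list screen is chosen. The data here is invented for illustration.

```python
def choose_screen(food_name, stored_foods):
    # branch on whether the recognized food is already in the compartment
    if food_name in stored_foods:
        return f"notice: '{food_name}' is already in the storage compartment"
    return f"food list for '{food_name}'"

stored = {"milk", "eggs"}
print(choose_screen("milk", stored))   # -> notice: 'milk' is already in the storage compartment
print(choose_screen("water", stored))  # -> food list for 'water'
```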
- the refrigerator 1 may identify whether a second speech indicating the identification mark is input via the microphone 182 ( 2203 ).
- the refrigerator 1 may display at least one piece of purchase information of food corresponding to the identification mark, on the display 120 installed on the front surface of the refrigerator 1 based on the recognition of the second speech ( 2204 ).
- the refrigerator 1 may acquire the identification mark by recognizing the second speech using the learning network model, which is trained through the artificial intelligence algorithm, and display the food purchase information corresponding to the identification mark, on the display.
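The two-step flow of FIG. 22 (operations 2201-2204) can be sketched as follows, assuming a hypothetical catalog: a first speech yields a food list with identification marks, and a second speech naming a mark yields purchase information. Food names and prices are invented.

```python
FOOD_LIST = {}  # mark -> food item, filled by the first step

def on_first_speech(food_name, catalog):
    """Operation 2202: build a marked food list for the recognized name."""
    FOOD_LIST.clear()
    for i, item in enumerate(catalog.get(food_name, []), start=1):
        FOOD_LIST[str(i)] = item
    return FOOD_LIST

def on_second_speech(mark):
    """Operation 2204: purchase info for the item under the spoken mark."""
    item = FOOD_LIST.get(mark)
    return None if item is None else f"Purchase info: {item['name']}, {item['price']}"

catalog = {"water": [{"name": "Still water 1 L", "price": "$1.00"},
                     {"name": "Sparkling water 1 L", "price": "$1.50"}]}
on_first_speech("water", catalog)
print(on_second_speech("2"))  # -> Purchase info: Sparkling water 1 L, $1.50
```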
- FIG. 23 is a flow chart of a network system according to one embodiment of the present disclosure.
- a network system may include a first element 2301 and a second element 2302 .
- the first element 2301 may be the refrigerator 1
- the second element 2302 may be the server SV 1 in which the data recognition model is stored, or a cloud computing system including at least one server.
- the first element 2301 may be a general purpose processor and the second element 2302 may be an artificial intelligence dedicated processor.
- the first element 2301 may be at least one application
- the second element 2302 may be an operating system (OS).
- the second element 2302 may be more integrated and more dedicated, and may have a smaller delay, greater performance, and more resources than the first element 2301 ; thus, the second element 2302 may process the operations required for generating, updating, or applying the data recognition model more quickly and effectively than the first element 2301 .
- an interface for transmitting/receiving data between the first element 2301 and the second element 2302 may be defined.
- an API that takes, as a parameter, feature data to be applied to a data recognition model may be defined.
- An API may be defined as a set of subroutines or functions that may be called by any one protocol (e.g., a protocol defined in the refrigerator 1 ) for a certain processing of another protocol (e.g., a protocol defined in the server SV 1 ).
- an environment in which an operation of another protocol may be performed by any one protocol through an API may be provided.
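The interface described above can be sketched as a single API call: the first element (the refrigerator) invokes a function, exposed by the second element (the server), that takes feature data as its parameter and returns recognition information. The function name and the returned fields are assumptions, not part of the disclosure.

```python
def recognize(feature_data: bytes) -> dict:
    """API entry point on the second element; a stub that always
    'recognizes' the same food name and user."""
    assert isinstance(feature_data, bytes)
    return {"food_name": "water", "user": "user1"}

def request_recognition(speech_bytes: bytes) -> dict:
    # the first element packages speech data per the defined communication
    # format and calls across the interface (operations 2311-2313)
    return recognize(speech_bytes)

print(request_recognition(b"\x00\x01"))  # -> {'food_name': 'water', 'user': 'user1'}
```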
- the first element 2301 may receive the user first speech including the food name through the microphone 182 ( 2310 ).
- the first element 2301 may convert the input analog speech signal into speech data in the form of a digital signal and transmit the speech data to the second element 2302 . At this time, the first element 2301 may change the speech data according to the defined communication format and transmit the changed speech data to the second element 2302 ( 2311 ).
- the second element 2302 may recognize the received speech data ( 2312 ).
- the second element 2302 may recognize speech data by applying the received speech data into the trained learning network model, and acquire recognition information based on the result of the recognition.
- the second element 2302 may acquire the food name and information on the user uttering the speech, from the speech data.
- the second element 2302 may transmit the acquired recognition information to the first element 2301 ( 2313 ).
- the first element 2301 may acquire information, which is requested by the speech data, based on the received recognition information ( 2314 ).
- the first element 2301 may acquire food information corresponding to the food name, which corresponds to information requested by the speech data, from the storage of the first element 2301 or from a third element 2303 .
- the third element 2303 may be a server communicated with the first element 2301 .
- the server may be the server SV 2 configured to transmit information requested by the speech data in FIGS. 9 and 10 .
- the second element 2302 may directly transmit the acquired recognition information to the third element 2303 without passing through the first element 2301 ( 2323 ).
- the third element 2303 may acquire information requested by the speech data, based on the received recognition data and transmit the acquired information, which is requested by the speech data, to the first element 2301 ( 2324 ).
- the first element 2301 may acquire information, which is requested by the speech data, from the third element 2303 .
- the first element 2301 may display a food list including food information corresponding to the food name and an identification mark for identifying the food information, which is requested by the speech data ( 2315 ).
- the first element 2301 may display at least one piece of food purchase information corresponding to the identification mark.
- FIG. 24 is a view illustrating a method in which a user purchases food via a refrigerator according to another embodiment of the present disclosure.
- a first speech for ordering a specific food may be input through a microphone 182 ( 980 ).
- a controller 110 may recognize the first speech including the food name. As a recognition result, the controller 110 may acquire the recognized food name and information on the user uttering the first speech.
- the controller 110 may acquire food information, which is preferred by the user uttering the first speech, related to the food name ( 981 ).
- the controller 110 may display a food list including information on a plurality of foods preferred by the user and identification marks for identifying the plurality of pieces of food information, on the display 120 ( 982 ).
- a second speech indicating a particular identification mark among the identification marks contained in the food list may be input via the microphone 182 ( 983 ).
- the controller 110 may recognize the second speech including the identification mark. As a result of the recognition, the controller 110 may display the purchase information of the food corresponding to the identification mark, on the display 120 ( 984 ).
- a third speech requesting the food purchase may be input through the microphone 182 ( 985 ).
- the third speech requesting the food purchase may be a speech “buy now” uttered by the user.
- the controller 110 may recognize the third speech.
- the controller 110 may display a UI (e.g., a UI for payment) for purchasing the food on the display 120 , as a recognition result of the third speech.
- the controller 110 may control the communication circuitry 140 so that a message indicating the purchase status is output to an external device 986 - 1 ( 986 ).
- the message indicating the purchase status may include a message for confirming whether to purchase a food before the purchase, a message asking the user to input a password for the purchase, or a message indicating the purchase result after the purchase.
- the controller 110 may recognize at least one of the first speech and the third speech to acquire the information of the user uttering the speech.
- the controller 110 may transmit a message informing of the purchase status to a user device corresponding to the user information, by using the acquired user information.
- the controller 110 may transmit information on the stored food to the user device.
- the controller 110 may transmit the message to a device belonging to a guardian of the user uttering the speech. For example, when a person who ordered the food is a child, the controller 110 may send a message to a device belonging to the child's parent to notify the purchase status described above.
- the external device 986 - 1 receiving the message may indicate the purchase status in a text, a speech, an image, a haptic manner or by emitting light, but is not limited thereto.
- the controller 110 may transmit a message indicating the purchase status to a device, which is pre-selected, or the external device 986 - 1 communicated with the refrigerator 1 .
- the controller 110 may transmit a message informing of the purchase status to the external device 986 - 1 satisfying the predetermined criteria.
- the external device 986 - 1 satisfying the predetermined criteria may be an external device placed in a certain place (e.g., a living room or a main room), an external device whose signal for communicating with the refrigerator 1 has a certain intensity or more, or an external device placed within a certain distance from the refrigerator 1 .
- the predetermined criteria may be that the external device 986 - 1 communicates with the refrigerator 1 according to a certain communication method.
- the certain communication method may include the local area communication system (e.g., WiFi, Bluetooth, or Zigbee) or the direct communication method.
- the controller 110 may transmit a push message informing the purchase status to the external device 986 - 1 via a communication channel established according to the local area communication method.
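Selecting which external device receives the purchase-status message under the criteria above (place, signal intensity, distance) can be sketched as follows. The thresholds and device records are invented for illustration.

```python
def pick_targets(devices, min_rssi=-70, max_distance_m=10,
                 places=("living room", "main room")):
    # a device qualifies if it satisfies any one of the three criteria
    return [d["name"] for d in devices
            if d["place"] in places
            or d["rssi"] >= min_rssi
            or d["distance_m"] <= max_distance_m]

devices = [
    {"name": "tablet",  "place": "living room", "rssi": -80, "distance_m": 15},
    {"name": "phone",   "place": "bedroom",     "rssi": -60, "distance_m": 20},
    {"name": "speaker", "place": "garage",      "rssi": -90, "distance_m": 30},
]
print(pick_targets(devices))  # -> ['tablet', 'phone']
```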
- The term “module” used in the present disclosure may mean, for example, a unit including one of hardware, software, and firmware, or a combination of two or more thereof.
- a “module,” for example, may be interchangeably used with terminologies such as a unit, logic, a logical block, a component, a circuit, etc.
- the “module” may be a minimum unit of a component integrally configured or a part thereof.
- the “module” may be a minimum unit performing one or more functions or a portion thereof.
- the “module” may be implemented mechanically or electronically.
- the “module” may include at least one of an application-specific integrated circuit (ASIC) chip performing certain operations, field-programmable gate arrays (FPGAs), or a programmable-logic device, known or to be developed in the future.
- the above-mentioned embodiments may be implemented in the form of program instructions executed by a variety of computer means and stored in a computer-readable storage medium.
- the computer-readable storage medium may include program instructions, data files, and data structures, alone or in combination.
- the program instructions may be specially designed to implement the present disclosure or may use various functions or definitions that are well-known and available to those of ordinary skill in the computer software field.
- the computer-readable storage medium may include a hard disk, a floppy disk, magnetic media (e.g., a magnetic tape), optical media (e.g., a compact disc read only memory (CD-ROM) or a digital versatile disc (DVD)), magneto-optical media (e.g., a floptical disk), and hardware devices (e.g., a read only memory (ROM), a random access memory (RAM), or a flash memory).
- the program instructions may include not only machine code, such as that generated by a compiler, but also high-level language code executable on a computer using an interpreter.
- the above hardware unit may be configured to operate via one or more software modules for performing an operation of the present disclosure, and vice versa.
- a method according to the disclosed embodiments may be provided in a computer program product.
- a computer program product may include a software program, a computer-readable storage medium in which the software program is stored, or applications traded between a seller and a purchaser.
- a computer program product may include an application in the form of a software program (e.g., a downloadable application) that is electronically distributed by the refrigerator 1 , the server SV 1 , the server SV 2 , or a manufacturer of the devices, or through an electronic marketplace (e.g., Google Play Store, App Store, etc.).
- the storage medium may be a server of a manufacturer, a server of an electronic marketplace, or a storage medium of a relay server for temporarily storing the software program.
Description
- This application is a 371 National Stage of International Application No. PCT/KR2017/015548, filed Dec. 27, 2017, which claims priority to Korean Patent Application No. 10-2017-0000880, filed Jan. 3, 2017, and Korean Patent Application No. 10-2017-0179174, filed Dec. 26, 2017, the disclosures of which are herein incorporated by reference in their entirety.
- The present disclosure relates to a refrigerator and an information display method thereof, more particularly, to a refrigerator capable of communicating with an external device and an information display method thereof.
- In addition, the present disclosure relates to an artificial intelligence (AI) system capable of mimicking functions of the human brain such as recognition and determination, by using a machine learning algorithm, and an application thereof.
- Recently, refrigerators have been equipped with a display to display a temperature of a storage compartment and an operating mode of the refrigerator.
- Such a display not only enables a user to easily acquire image information using a graphic user interface, but also enables a user to intuitively input control commands using a touch panel. In other words, the new display is capable of receiving information as well as capable of displaying information.
- In addition, the new refrigerator includes a communication module for connecting to an external device (for example, a server connected to the Internet).
- Refrigerators may be connected to the Internet through a communication module, acquire a variety of information from different servers, and provide a variety of services based on the acquired information. For example, through the Internet, refrigerators may provide a variety of services such as Internet shopping, as well as food-related information such as food information and recipes.
- As such, the refrigerator provides a variety of services to users through its display and communication modules.
- In addition, artificial intelligence (AI) technology has recently proliferated. Unlike rule-based smart systems, an AI system is a computer system that implements human-level intelligence, in which a machine learns, determines, and becomes intelligent by itself. As AI systems are used, their recognition rate increases and user preferences are understood more accurately; therefore, rule-based smart systems are gradually being replaced by deep-learning based AI systems.
- AI technology includes machine learning (e.g., deep learning) and element technologies that utilize the machine learning. The machine learning is an algorithm technology for autonomously categorizing and learning characteristics of input data. Element technologies are technologies that utilize the machine learning and may include linguistic understanding, visual understanding, reasoning/prediction, knowledge representation, and motion control.
- Various fields in which the AI technology is applied are as follows. Linguistic understanding is a technology for recognizing, applying/processing human languages/characters and includes natural language processing, machine translation, dialogue system, query response, speech recognition/synthesis, etc. Visual understanding is a technology for recognizing and processing objects in a manner similar to that of human vision and includes object recognition, object tracking, image searching, human recognition, scene understanding, space understanding, and image enhancement. Reasoning/prediction is a technology to determine information for logical reasoning and prediction and includes knowledge/probability-based reasoning, optimization prediction, preference-based planning, and recommendation. Knowledge representation is a technology for automating human experience information into knowledge data and includes knowledge building (data generation/categorization) and knowledge management (data utilization). Motion control is a technology for controlling autonomous driving of a vehicle and a motion of a robot and includes motion control (navigation, collision avoidance, driving), manipulation control (behavior control), etc.
- The present disclosure is directed to providing a refrigerator capable of receiving a command in a speech by using a speech recognition technology and capable of outputting a content in a speech.
- Further, the present disclosure is directed to providing a refrigerator capable of providing Internet shopping based on a command in a speech.
- Further, the present disclosure is directed to providing a refrigerator capable of identifying a plurality of users and providing a content appropriate for the identified user.
- One aspect of the present disclosure provides a refrigerator including a microphone, a display configured to display information according to a speech input via the microphone, and a controller configured to, when a first speech including a food name is recognized, display a list including information on a food having a name corresponding to the food name and an identification mark identifying the food, on the display, and configured to, when a second speech referring to at least one mark among the marks is recognized, display food information indicated by the mark contained in the second speech, on the display.
- When the first speech is recognized, the controller may overlap the list, in a card-type user interface, with a user interface of an application displayed on the display.
- When the second speech is recognized, the controller may execute an application providing information of food indicated by a mark contained in the second speech so as to display food information, which is provided from the application, on the display.
- The refrigerator may further include a communication circuitry, and when the first speech is recognized, the controller may transmit data of the first speech to a server via the communication circuitry, and when the communication circuitry receives information on the list transmitted from the server, the controller may display the list on the display.
- The refrigerator may further include a communication circuitry, and when the second speech is recognized, the controller may transmit data of the second speech to a server via the communication circuitry, and when the communication circuitry receives food information transmitted from the server, the controller may display the food information on the display.
- The refrigerator may further include a communication circuitry, and when the first speech is recognized, the controller may transmit data of the first speech to a server via the communication circuitry, and when the communication circuitry receives analysis information on the first speech transmitted from the server, the controller may transmit a command according to the analysis information to an application, which is related to performance of an operation according to the analysis information, so as to allow the application to display the list.
- The refrigerator may further include a communication circuitry, and when the second speech is recognized, the controller may transmit data of the second speech to a server via the communication circuitry, and when the communication circuitry receives analysis information on the second speech transmitted from the server, the controller may transmit a command according to the analysis information to an application, which is related to performance of an operation according to the analysis information, so as to allow the application to display the food information.
- The refrigerator may further include a speaker configured to, when the list is displayed on the display due to the recognition of the first speech, output a speech indicating that the list is displayed, and configured to, when the food information is displayed on the display due to the recognition of the second speech, output a speech indicating that the food information is displayed.
- Another aspect of the present disclosure provides a refrigerator including a microphone, a display configured to display information according to a speech input via the microphone, a speaker configured to output a speech, and a controller configured to, when a speech is input via the microphone, execute a command indicated by the input speech, on an application displayed on the display, and configured to, when the controller is not able to execute the command indicated by the input speech, on the application, execute the command on another application according to a priority that is pre-determined for applications related to the command.
- When the controller is not able to execute the command indicated by the input speech, on the application, the controller may output a speech indicating that it is impossible to execute the command on the application, via the speaker.
- When the priority is not present, the controller may output a speech requesting confirmation of whether to execute the command on another application, via the speaker.
- When an answer to the request of the confirmation is input, the controller may execute a command indicated by the speech on an application indicated by the input answer.
- When the controller is not able to execute the command indicated by the input speech on the application, the controller may select an application, in which the command is to be executed, according to the priority that is pre-determined for applications related to the command, and execute the command on the selected application.
- When a target, in which the command indicated by the speech is to be executed, is not contained in the speech input via the microphone, the controller may execute the command indicated by the input speech, on the application displayed on the display.
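The fallback behavior described above can be sketched as follows: try the command on the foreground application; if it cannot handle the command, fall back to other applications in a pre-determined priority order; with no priority, ask the user. Application names and commands are invented for illustration.

```python
PRIORITY = {"order food": ["shopping", "browser"]}  # command -> app priority order

def execute(command, foreground, apps):
    # first try the application currently displayed on the display
    if command in apps.get(foreground, ()):
        return f"{foreground}: {command}"
    # otherwise fall back along the pre-determined priority
    for app in PRIORITY.get(command, []):
        if command in apps.get(app, ()):
            return f"{app}: {command}"
    # no priority or no capable application: request confirmation
    return "ask user which application to use"

apps = {"memo": {"write note"}, "shopping": {"order food"}}
print(execute("order food", "memo", apps))  # -> shopping: order food
```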
- Another aspect of the present disclosure provides a refrigerator including a microphone, a display configured to display information according to a speech input via the microphone, a speaker configured to output a speech, and a controller configured to, when a speech is input via the microphone and it is needed to identify a user for executing a command indicated by the speech, display pre-registered users on the display, and configured to output, via the speaker, a speech requesting selection of a user from among the displayed users.
- When a speech selecting at least one user among the displayed users is input, the controller may execute the command indicated by the speech according to the selected user, and display a result thereof on the display.
- When it is needed to identify a user for executing a command indicated by the speech, the controller may display the pre-registered users and a mark identifying each user on the display.
- When a speech selecting at least one mark among the marks is input, the controller may execute a command indicated by the speech according to a user indicated by the selected mark, and display a result thereof on the display.
- When a speech is input via the microphone and an expression indicating a user is contained in the speech, the controller may execute a command indicated by the speech according to the user indicated in the speech, and display a result thereof on the display.
- When a speech is input via the microphone, the controller may execute a command indicated by the speech according to a user indicated by speech data matching the input speech among pre-stored speech data, and display a result thereof on the display.
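Matching an input speech against pre-stored speech data, as described above, could be sketched with a simple similarity comparison over stored voice feature vectors. The feature representation, the threshold, and the function names below are assumptions for illustration only, not the disclosed method.

```python
# Hypothetical sketch: identify the speaker by comparing voice feature
# vectors against pre-stored per-user profiles (illustrative only).
import math


def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)


def identify_user(input_features, stored_profiles, threshold=0.8):
    """Return the registered user whose stored voice features best match
    the input, or None if no profile clears the threshold (in which case
    the refrigerator would list the pre-registered users on the display
    and ask the speaker to choose one)."""
    best_user, best_score = None, threshold
    for user, features in stored_profiles.items():
        score = cosine_similarity(input_features, features)
        if score > best_score:
            best_user, best_score = user, score
    return best_user
```

When `identify_user` returns None, the flow falls back to the on-screen selection described in this aspect.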
- An aspect of the present disclosure may provide a refrigerator that receives a command as speech by using speech recognition technology and is capable of outputting content as speech.
- An aspect of the present disclosure may provide a refrigerator that provides Internet shopping based on a spoken command.
- An aspect of the present disclosure may provide a refrigerator that identifies a plurality of users and provides content appropriate for the identified user.
- Another aspect of the present disclosure provides a refrigerator including a storage compartment configured to store food, a temperature detector configured to detect an internal temperature of the storage compartment, a cooler configured to supply cool air to the storage compartment, a microphone configured to receive a speech, a display configured to display information, at least one processor configured to be electrically connected to the temperature detector, the microphone, and the display, and a memory configured to be electrically connected to the at least one processor.
- The memory may store at least one instruction configured to, when a first speech including a food name is recognized via the microphone, allow the processor to display a food list, which includes food information corresponding to the food name and an identification mark identifying the food information, on the display, and configured to, when a second speech referring to the identification mark is recognized via the microphone, allow the processor to display at least one piece of purchase information of food corresponding to the identification mark, on the display.
- The memory may store at least one instruction configured to, when the first speech including the food name is input via the microphone, allow the processor to acquire the food name by recognizing the first speech using a learning network model, which is trained using an artificial intelligence algorithm, and configured to allow the processor to display the food list including the food information corresponding to the food name, on the display. The learning network model may be trained by using a plurality of speeches and a plurality of words corresponding to the plurality of speeches.
- The memory may store at least one instruction configured to, when the first speech including the food name is input via the microphone, allow the processor to acquire the food name and information of the user uttering the first speech by recognizing the first speech using a learning network model, which is trained using an artificial intelligence algorithm, and configured to allow the processor to display the food list including the food information, which corresponds to the food name and is related to the user information, on the display. The learning network model may be trained by using a plurality of speeches and a plurality of pieces of user information corresponding to the plurality of speeches.
- The memory may store at least one instruction configured to, when the first speech is recognized, allow the processor to identify whether a food related to the food name is placed in the storage compartment, and configured to, when the food is placed in the storage compartment, allow the processor to display information indicating that the food is placed in the storage compartment, on the display.
- The memory may store at least one instruction configured to, when the first speech is recognized, allow the processor to identify whether a food related to the food name is placed in the storage compartment, and configured to, when the food is not placed in the storage compartment, allow the processor to display the food list including the food information corresponding to the food name, on the display.
- The memory may store at least one instruction configured to, when the first speech is recognized, allow the processor to overlap the food list, in a card-type user interface, with a user interface of an application displayed on the display.
- The memory may store at least one instruction configured to, when the second speech is recognized, allow the processor to execute an application providing information of the food indicated by a mark contained in the second speech, so as to display the food information, which is provided from the application, on the display.
- The refrigerator may further include a communication circuitry, and the memory may store at least one instruction configured to, when the first speech is recognized, allow the processor to transmit data of the first speech to a server via the communication circuitry, and configured to, when the communication circuitry receives information on the food list transmitted from the server, allow the processor to display the food list on the display.
- The refrigerator may further include a communication circuitry, and the memory may store at least one instruction configured to, when the second speech is recognized, allow the processor to transmit data of the second speech to a server via the communication circuitry, and configured to, when the communication circuitry receives food information transmitted from the server, allow the processor to display the food information on the display.
- The refrigerator may further include a communication circuitry, and the memory may store at least one instruction configured to, when the first speech is recognized, allow the processor to transmit data of the first speech to a server via the communication circuitry, and configured to, when the communication circuitry receives analysis information on the first speech transmitted from the server, allow the processor to transmit a command according to the analysis information to an application related to performing an operation according to the analysis information, so as to allow the application to display the food list.
- The refrigerator may further include a communication circuitry, and the memory may store at least one instruction configured to, when the second speech is recognized, allow the processor to transmit data of the second speech to a server via the communication circuitry, and configured to, when the communication circuitry receives analysis information on the second speech transmitted from the server, allow the processor to transmit a command according to the analysis information to an application related to performing an operation according to the analysis information, so as to allow the application to display the food information.
- The refrigerator may further include a speaker, and the memory may store at least one instruction configured to, when the food list is displayed on the display due to the recognition of the first speech, allow the processor to output, via the speaker, a speech indicating that the food list is displayed, and configured to, when the food information is displayed on the display due to the recognition of the second speech, allow the processor to output, via the speaker, a speech indicating that the food information is displayed.
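The two-turn flow above (a first speech naming a food, then a second speech referring to an identification mark) can be sketched as follows. The catalog structure, the letter marks "A", "B", ..., and the function names are assumptions for illustration, not the disclosed implementation.

```python
# Illustrative sketch of the two-turn purchase dialog (hypothetical
# names and data shapes; the real flow involves a recognition server).

def build_food_list(food_name, catalog):
    """First speech: collect items matching the spoken food name and
    attach an identification mark to each, as shown on the display."""
    matches = [item for item in catalog if food_name in item["name"]]
    marks = [chr(ord("A") + i) for i in range(len(matches))]
    return {mark: item for mark, item in zip(marks, matches)}


def purchase_info_for_mark(mark, food_list):
    """Second speech: return the purchase information for the item the
    spoken mark refers to; None would prompt the user to repeat it."""
    item = food_list.get(mark.upper())
    if item is None:
        return None
    return {"name": item["name"], "price": item["price"]}
```

A user saying "apple" would first see the marked list; saying "B" afterwards would bring up the purchase information for the second listed item.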
- Another aspect of the present disclosure provides a refrigerator including a storage compartment configured to store food, a temperature detector configured to detect an internal temperature of the storage compartment, a cooler configured to supply cool air to the storage compartment, a microphone configured to receive a speech, a display configured to display information, at least one processor configured to be electrically connected to the temperature detector, the microphone, and the display, and a memory configured to be electrically connected to the at least one processor.
- The memory may store at least one instruction configured to, when a speech is input via the microphone, allow the processor to execute a command indicated by the input speech on an application displayed on the display, and configured to, when the processor is not able to execute the command indicated by the input speech on the application, allow the processor to execute the command on another application according to a priority pre-determined for applications related to the command.
- The refrigerator may further include a speaker, and the memory may store at least one instruction configured to, when the processor is not able to execute the command indicated by the input speech on the application, allow the processor to output, via the speaker, a speech indicating that it is impossible to execute the command on the application.
- The refrigerator may further include a speaker, and the memory may store at least one instruction configured to, when the priority is not present, allow the processor to output, via the speaker, a speech requesting confirmation of whether to execute the command on another application.
- The memory may store at least one instruction configured to, when an answer to the request for confirmation is input, allow the processor to execute the command indicated by the speech on an application indicated by the input answer.
- The memory may store at least one instruction configured to, when the processor is not able to execute the command indicated by the input speech on the application, allow the processor to select an application in which the command is to be executed, according to the priority pre-determined for applications related to the command, and configured to allow the processor to execute the command on the selected application.
- The memory may store at least one instruction configured to, when a target in which the command indicated by the speech is to be executed is not contained in the speech input via the microphone, allow the processor to execute the command indicated by the input speech on the application displayed on the display.
- Another aspect of the present disclosure provides a refrigerator including a storage compartment configured to store food, a temperature detector configured to detect an internal temperature of the storage compartment, a cooler configured to supply cool air to the storage compartment, a microphone configured to receive a speech, a display configured to display information, a speaker configured to output a speech, at least one processor configured to be electrically connected to the temperature detector, the microphone, and the display, and a memory configured to be electrically connected to the at least one processor.
- The memory may store at least one instruction configured to, when a speech is input via the microphone and it is needed to identify a user for executing a command indicated by the speech, allow the processor to display pre-registered users on the display, and configured to output, via the speaker, a speech requesting selection of a user from among the displayed users.
- The memory may store at least one instruction configured to, when a speech selecting at least one user among the displayed users is input, allow the processor to execute the command indicated by the speech according to the selected user, and configured to allow the processor to display a result thereof on the display.
- The memory may store at least one instruction configured to, when it is needed to identify a user for executing a command indicated by the speech, allow the processor to display the pre-registered users and a mark identifying each user on the display.
- The memory may store at least one instruction configured to, when a speech selecting at least one mark among the marks is input, allow the processor to execute a command indicated by the speech according to a user indicated by the selected mark, and configured to allow the processor to display a result thereof on the display.
- The memory may store at least one instruction configured to, when a speech is input via the microphone and an expression indicating a user is contained in the speech, allow the processor to execute a command indicated by the speech according to the user indicated in the speech, and configured to allow the processor to display a result thereof on the display.
- The memory may store at least one instruction configured to, when a speech is input via the microphone, allow the processor to execute a command indicated by the speech according to a user indicated by speech data matching the input speech among pre-stored speech data, and configured to allow the processor to display a result thereof on the display.
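The mark-based user selection described in this aspect can be sketched as a pair of lookups. The numeric mark scheme and the function names below are hypothetical illustrations, not part of the disclosure.

```python
# Hypothetical sketch of mark-based user selection: the display pairs
# each pre-registered user with a mark, and a spoken mark selects one.

def assign_marks(pre_registered_users):
    """Pair each pre-registered user with an identifying mark, as they
    would be shown together on the display."""
    return {str(i + 1): user for i, user in enumerate(pre_registered_users)}


def resolve_selection(spoken_mark, marked_users):
    """Map a spoken mark back to the user it identifies; None would
    prompt the refrigerator to ask again via the speaker."""
    return marked_users.get(spoken_mark.strip())
```

The command is then executed according to the resolved user, and the result is displayed.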
- Another aspect of the present disclosure provides an information display method of a refrigerator including receiving a first speech including a food name via a microphone, displaying a food list including food information corresponding to the food name and an identification mark identifying the food information, on a display installed on the front surface of the refrigerator, based on the recognition of the first speech, receiving a second speech referring to the identification mark via the microphone, and displaying at least one piece of purchase information of food corresponding to the identification mark, on the display, based on the recognition of the second speech.
- It is possible to provide a refrigerator capable of receiving a command as speech by using speech recognition technology and capable of outputting content as speech.
- It is possible to provide a refrigerator capable of providing Internet shopping based on a spoken command.
- It is possible to provide a refrigerator capable of identifying a plurality of users and providing content appropriate for the identified user.
-
FIG. 1 is a view of an appearance of a refrigerator according to one embodiment of the present disclosure. -
FIG. 2 is a front view of the refrigerator according to one embodiment of the present disclosure. -
FIG. 3 is a block diagram of a configuration of the refrigerator according to one embodiment of the present disclosure. -
FIG. 4 is a view of a display included in the refrigerator according to one embodiment of the present disclosure. -
FIG. 5 is a view of a home screen displayed on the display included in the refrigerator according to one embodiment of the present disclosure. -
FIGS. 6 and 7 are views of a speech recognition application displayed on the display included in the refrigerator according to one embodiment of the present disclosure. -
FIG. 8 is a view illustrating a communication with an external device through a communication circuitry included in the refrigerator according to one embodiment of the present disclosure. -
FIGS. 9 and 10 are views illustrating a communication between a server and the refrigerator according to one embodiment of the present disclosure. -
FIGS. 11A to 14 are views illustrating a method for a user to purchase food through the refrigerator according to one embodiment of the present disclosure. -
FIGS. 15 and 16 are views illustrating a method in which the refrigerator according to one embodiment of the present disclosure performs a command in accordance with a speech command. -
FIGS. 17 to 20 are views illustrating a method in which the refrigerator according to one embodiment of the present disclosure identifies a user and provides information corresponding to the identified user. -
FIG. 21A is a block diagram of a controller for data training and recognition according to one embodiment of the present disclosure. -
FIG. 21B is a detail block diagram of a data learner and a data recognizer according to one embodiment of the present disclosure. -
FIG. 22 is a flow chart of the refrigerator displaying information according to one embodiment of the present disclosure. -
FIG. 23 is a flow chart of a network system according to one embodiment of the present disclosure. -
FIG. 24 is a view illustrating a method in which a user purchases food via a refrigerator according to another embodiment of the present disclosure.
- Embodiments described in the present disclosure and configurations shown in the drawings are merely examples of the embodiments of the present disclosure, and may be modified in various ways at the time of filing of the present application to replace the embodiments and drawings of the present disclosure.
- The terms used herein are used to describe the embodiments and are not intended to limit and/or restrict the present disclosure.
- For example, the singular forms “a,” “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise.
- In the present disclosure, the terms "including", "having", and the like are used to specify the presence of features, numbers, steps, operations, elements, components, or combinations thereof, but do not preclude the presence or addition of one or more other features, numbers, steps, operations, elements, components, or combinations thereof.
- It will be understood that, although the terms first, second, third, etc. may be used herein to describe various elements, the elements are not limited by these terms. These terms are only used to distinguish one element from another element.
- In the following description, terms such as "unit", "part", "block", "member" and "module" may indicate a unit for processing at least one function or operation. For example, the terms may indicate at least one piece of hardware, such as a Field Programmable Gate Array (FPGA) or an Application Specific Integrated Circuit (ASIC), or at least one piece of software stored in a memory or processed by a processor.
- In addition, the term “food” is used to refer to industrial products manufactured by humans or machines or products produced or hunted by a user.
- Hereinafter, exemplary embodiments of the present disclosure will be described in detail with reference to the accompanying drawings. The same reference numerals or signs shown in the drawings of the present disclosure indicate components or elements performing substantially the same function.
-
FIG. 1 is a view of an appearance of a refrigerator according to one embodiment of the present disclosure, and FIG. 2 is a front view of the refrigerator according to one embodiment of the present disclosure. FIG. 3 is a block diagram of a configuration of the refrigerator according to one embodiment of the present disclosure, and FIG. 4 is a view of a display included in the refrigerator according to one embodiment of the present disclosure.
- As illustrated in FIGS. 1, 2 and 3, a refrigerator 1 may include a body 10 including an open front surface, and a door 30 opening and closing the open front surface of the body 10. The body 10 is provided with a storage compartment 20 having an open front surface to keep food in a refrigerated or frozen state.
- The body 10 may form an outer appearance of the refrigerator 1. The body 10 may include an inner case 11 forming a storage compartment 20 and an outer case 12 forming an appearance of the refrigerator by being coupled to the outside of the inner case 11. A heat insulating material (not shown) may be filled between the inner case 11 and the outer case 12 of the body 10 to prevent leakage of the cool air of the storage compartment 20.
- The storage compartment 20 may be divided into a plurality of storage compartments by a horizontal partition 21 and a vertical partition 22. For example, as illustrated in FIG. 1, the storage compartment 20 may be divided into an upper storage compartment 20 a, a lower first storage compartment 20 b, and a lower second storage compartment 20 c. A shelf 23 on which food is placed, and a closed container 24 in which food is stored in a sealed manner, may be provided in the storage compartment 20.
- The storage compartment 20 may be opened and closed by the door 30. For example, as illustrated in FIG. 1, the upper storage compartment 20 a may be opened and closed by an upper first door 30 aa and an upper second door 30 ab. The lower first storage compartment 20 b may be opened and closed by a lower first door 30 b, and the lower second storage compartment 20 c may be opened and closed by a lower second door 30 c.
- The door 30 may be provided with a handle 31 for easily opening and closing the door 30. The handle 31 may be elongated in the vertical direction between the upper first door 30 aa and the upper second door 30 ab, and between the lower first door 30 b and the lower second door 30 c. Therefore, when the door 30 is closed, the handle 31 may be seen as being integrally provided.
- Further, the refrigerator 1 may include at least one of a display 120, a storage 130, a communication circuitry 140, a dispenser 150, a cooler 160, a temperature detector 170, an audio 180, and a main controller 110.
- The display 120 may interact with a user. For example, the display 120 may receive user input from a user and display an image according to the received user input.
- The display 120 may include a display panel 121 displaying an image, a touch panel 122 receiving user input, and a touch screen controller 123 controlling/driving the display panel 121 and the touch panel 122.
- The display panel 121 may convert image data, received from the main controller 110 through the touch screen controller 123, into an optical image that can be viewed by a user.
- The
touch panel 122 may receive a user's touch input and transmit an electrical signal corresponding to the received touch input to thetouch screen controller 123. - Particularly, the
touch panel 122 detects a user touch on thetouch panel 122 and transmits an electrical signal corresponding to coordinates of the user touch point, to thetouch screen controller 123. As will be described below, thetouch screen controller 123 may acquire the coordinates of the user touch point based on the electrical signals received from thetouch panel 122. - In addition, the
touch panel 122 may be located on the front surface of thedisplay panel 121. In other words, thetouch panel 122 may be provided on a surface on which an image is displayed. Accordingly, thetouch panel 122 may be made of a transparent material to prevent the image displayed on thedisplay panel 121 from being distorted. - The
touch panel 122 may employ a resistance film type touch panel or a capacitive touch panel. However, thetouch panel 122 is not limited thereto, and thus the touch panel may employ a variety of input means detecting the user's touch or approach, and outputting an electrical signal corresponding to coordinates of the detected touch point or coordinates of approach point. - The
touch screen controller 123 may drive/control an operation of thedisplay panel 121 and thetouch panel 122. Particularly, thetouch screen controller 123 may drive thedisplay panel 121 to allow thedisplay panel 121 to display an optical image corresponding to the image data received from themain controller 110, and control thetouch panel 122 to allow the touch panel to detect the coordinates of the user touch point. - Depending on embodiments, the
touch screen controller 123 may identify the coordinates of the user touch point based on the electrical signal output from thetouch panel 122, and transmit the identified coordinates of the user touch point to themain controller 110. - Further, depending on embodiments, the
touch screen controller 123 may transmit the electrical signal output from thetouch panel 122 to themain controller 110 to allow themain controller 110 to identify the coordinates of the user touch point. - The touch-
screen controller 123 may include a memory (not shown) storing programs and data for controlling an operation of thedisplay panel 121 and thetouch panel 122, and a microprocessor (not shown) performing an operation for controlling the operation of thetouch panel 121 and thetouch panel 122 according to the program and data stored in the memory. In addition, the memory and the processor of thetouch screen controller 123 may be provided as a separate chip or a single chip. - The
display 120 may be installed in thedoor 30 for the user's convenience. For example, as illustrated inFIG. 2 , thedisplay 120 may be installed in the uppersecond door 30 ab. Hereinafter thedisplay 120 installed on the uppersecond door 30 ab will be described. However, the position of thedisplay 120 is not limited to the uppersecond door 30 ab. For example, thedispenser 150 may be installed on any position as long as a user can see, such as the upperfirst door 30 aa, the lowerfirst door 30 b, the lowersecond door 30 c, and theouter case 12 of thebody 10. - In addition, the
display 120 may be provided with a wake up function that is automatically activated when a user approaches within a predetermined range. For example, when the user approaches within the predetermined range, thedisplay 120 may be activated. In other words, thedisplay 120 may be turned on. On the other hand, when the user is outside the predetermined range, thedisplay 120 may be inactivated. In other words, thedisplay 120 may be turned off. - The
display 120 may display various screens or images. The screens or images displayed on the display 120 will be described in detail below.
- The storage 130 may store control programs and control data for controlling the operation of the refrigerator 1, and various application programs and application data for performing various functions according to user input. For example, the storage 130 may include an operating system (OS) program for managing the configuration and resources (software and hardware) included in the refrigerator 1, an image display application for displaying pre-stored images, a video play application for playing pre-stored video, a music application for playing music, a radio application for playing the radio, a calendar application for managing a schedule, a memo application for storing a memo, an on-line shopping mall application for purchasing food on line, and a recipe application for providing a recipe.
- Further, the storage 130 may include a non-volatile memory that does not lose programs or data even when power is turned off. For example, the storage 130 may include a large capacity flash memory or a solid state drive (SSD) 131.
- The communication circuitry 140 may transmit data to an external device or receive data from an external device under the control of the main controller 110.
- The communication circuitry 140 may include at least one of communication modules. For example, the communication circuitry 140 may include a WiFi (Wireless Fidelity: WiFi®) module 141 connecting to a local area network (LAN) via an access point, a Bluetooth (Bluetooth®) module 142 communicating with an external device in a one-to-one relationship or communicating with a small number of external devices in a one-to-many relationship, and a ZigBee module 143 forming a LAN among a plurality of electronic devices (mainly home appliances).
- Further, the plurality of communication modules
- Operations of the refrigerator 1 through the communication circuitry 140 are described in more detail below.
- The dispenser 150 may discharge water or ice according to the user's input. In other words, through the dispenser 150 the user may directly take out water or ice without opening the door 30.
- The dispenser 150 may include a dispenser lever 151 receiving a user's discharge command, a dispenser nozzle 152 discharging water or ice, a flow path 153 guiding water from an external water source to the dispenser nozzle 152, a filter 154 purifying the water being discharged, and a dispenser display panel 155 displaying an operation state of the dispenser 150.
- The dispenser 150 may be installed outside the door 30 or the body 10. For example, as illustrated in FIG. 2, the dispenser 150 may be installed in the upper first door 30 aa. Hereinafter the dispenser 150 installed in the upper first door 30 aa will be described. However, the position of the dispenser 150 is not limited to the upper first door 30 aa. Therefore, the dispenser 150 may be installed on any position as long as a user can take out water or ice, such as the upper second door 30 ab, the lower first door 30 b, the lower second door 30 c, and the outer case 12 of the body 10.
- For example, the door 30 or the outer case 12 may be provided with a cavity 150 a recessed inward of the refrigerator to form a space for taking out water or ice, and the cavity 150 a may be provided with the dispenser nozzle 152 and the dispenser lever 151. When the user pushes the dispenser lever 151, water or ice is discharged from the dispenser nozzle 152.
- Particularly, when water is discharged through the dispenser nozzle 152, water may flow from the external water source (not shown) to the dispenser nozzle 152 along the flow path 153. Further, the water may be purified by the filter 154 while flowing to the dispenser nozzle 152.
- At this time, the filter 154 may be detachably installed in the body 10 or the door 30, and thus when the filter 154 is broken down, the filter 154 may be replaced by a new filter. - The cooler 160 may supply cool air to the
storage compartment 20.
- Particularly, the cooler 160 may maintain the temperature of the storage compartment 20 within a predetermined range by using the evaporation of a refrigerant.
- The cooler 160 may include a compressor 161 compressing a gaseous refrigerant, a condenser 162 converting the compressed gaseous refrigerant into a liquid refrigerant, an expander 163 reducing the pressure of the liquid refrigerant, and an evaporator 164 converting the decompressed liquid refrigerant into a gaseous state.
- Particularly, the cooler 160 may supply cool air to the storage compartment 20 by using the phenomenon that the decompressed liquid refrigerant absorbs thermal energy from the ambient air while being converted into the gaseous state. - However, the configuration of the cooler 160 is not limited to the
compressor 161, the condenser 162, the expander 163, and the evaporator 164.
- For example, the cooler 160 may include a Peltier element using the Peltier effect. The Peltier effect means that, when a current flows through a contact surface where two different metals are in contact with each other, heat is generated in one of the metals and heat is absorbed in the other metal. The cooler 160 may supply cool air to the storage compartment 20 using a Peltier element.
- Alternatively, the cooler 160 may include a magnetic cooler using the magnetocaloric effect. The magnetocaloric effect means that a certain substance (a magnetocaloric material) releases heat when it is magnetized and absorbs heat when it is demagnetized. The cooler 160 may supply cool air to the storage compartment 20 using a magnetic cooler.
- The temperature detector 170 may be placed inside the storage compartment 20 to detect the temperature inside the storage compartment 20. The temperature detector 170 may include a plurality of temperature sensors 171 installed in the plurality of storage compartments 20 a, 20 b, and 20 c, respectively. In addition, each of the plurality of temperature sensors 171 may include a thermistor in which the electrical resistance varies with temperature. - The audio 180 may include a
speaker 181 converting an electrical signal received from themain controller 110 into an acoustic signal and outputting the acoustic signal, and amicrophone 182 converting the acoustic signal into an electrical signal, and outputting the electrical signal into themain controller 110. - Based on the user input received via the
display 120 and/or the program and data stored in thestorage 130, themain controller 110 may control thedisplay 120, thestorage 130, thecommunication circuitry 140, thedispenser 150, the cooler 160, thetemperature detector 170 and the audio 180 contained in therefrigerator 1. - The
main controller 110 may include amicroprocessor 111 performing operations to control therefrigerator 1 and amemory 112 storing/memorizing programs and data related to the performance of the operation of themicroprocessor 111. - The
microprocessor 111 may load data stored/memorized in thememory 112 according to the program stored in thememory 112 and may perform an arithmetic operation or a logical operation on the loaded data. Further, themicroprocessor 111 may output the result of the arithmetic operation or the logical operation to thememory 112. - The
memory 112 may include a volatile memory that loses stored data when the power supply is stopped. The volatile memory may load programs and data from the above-describedstorage 130 and temporarily memorize the loaded data. In addition, the volatile memory may provide the memorized program and data to themicroprocessor 111, and memorize the data output from themicroprocessor 111. These volatile memories may include S-RAM, and D-RAM. - Further, the
memory 112 may include a non-volatile memory as needed. The non-volatile memory may preserve the memorized data when the power supply is stopped. The non-volatile memory may store firmware for managing and initializing various components contained in therefrigerator 1. The non-volatile may include Read Only Memory (ROM), Erasable Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM) and flash memory. - In addition, the
main controller 110 may include a plurality of microprocessors 111 and a plurality of memories 112. For example, the main controller 110 may include a first microprocessor and a first memory controlling the temperature detector 170, the dispenser 150 and the cooler 160 of the refrigerator 1. The main controller 110 may include a second microprocessor and a second memory controlling the display 120, the storage 130, the communication circuitry 140, and the audio 180 of the refrigerator 1. - Although it has been described that the microprocessor 111 and the memory 112 are functionally distinguished, the microprocessor 111 and the memory 112 may not be physically distinguished from each other. For example, the microprocessor 111 and the memory 112 may be implemented as separate chips or as a single chip. - The main controller 110 may control the overall operation of the refrigerator 1, and it may be assumed that the operation of the refrigerator 1 described below is performed under the control of the main controller 110. - Although it has been described that the main controller 110, the storage 130 and the communication circuitry 140 are functionally distinguished from each other in the above description, the main controller 110, the storage 130, and the communication circuitry 140 may not be physically distinguished from each other. For example, the main controller 110, the storage 130, and the communication circuitry 140 may be implemented as separate chips or as a single chip. - Hereinbefore the display 120, the storage 130, the communication circuitry 140, the dispenser 150, the cooler 160, the temperature detector 170, the audio 180 and the main controller 110 contained in the refrigerator 1 have been described, but a new configuration may be added or some configuration may be omitted as needed. -
FIG. 5 is a view of a home screen displayed on the display included in the refrigerator according to one embodiment of the present disclosure. - When the power is supplied to the refrigerator 1, the main controller 110 may allow the display 120 to display a home screen 200 as illustrated in FIG. 5. - A time/date area 210 displaying a time and date, an operational information area 220 displaying operation information of the refrigerator 1, and a plurality of launchers 230 for executing applications stored in the storage 130 may be displayed on the home screen 200. - Current time information and today's date information may be displayed on the time/date area 210. Further, location information (e.g., the name of a country or city) of the location where the refrigerator 1 is located may be displayed on the time/date area 210. - A storage compartment map 221 related to the operation of the plurality of storage compartments 20a, 20b, and 20c contained in the refrigerator 1 may be displayed on the operational information area 220. - Information related to the operation of the plurality of storage compartments 20a, 20b, and 20c contained in the refrigerator 1 may be displayed on the storage compartment map 221. For example, as illustrated in FIG. 5, the upper storage compartment 20a, the lower first storage compartment 20b, and the lower second storage compartment 20c may be partitioned and displayed on the storage compartment map 221, and a target temperature of the upper storage compartment 20a, a target temperature of the lower first storage compartment 20b, and a target temperature of the lower second storage compartment 20c may be displayed on the storage compartment map 221. - When the user touches the region indicating the respective storage compartment in the storage compartment map 221, the main controller 110 may display an image for setting the target temperature of each storage compartment on the display 120. For example, when the user touches the region indicating the upper storage compartment 20a in the storage compartment map 221, an image for setting the target temperature of the upper storage compartment 20a may be displayed on the display 120. - Further, a
timer setting icon 222 and a refrigerator setting icon 223 for executing an application controlling the operation of the refrigerator 1 may be displayed on the operational information area 220. - Based on the timer setting icon 222 being touched by the user, a timer setting screen for setting a target time of the timer may be displayed on the display 120. For example, the user can input a time at which an alarm will be output, or a time interval until an alarm will be output, via the timer setting screen. The refrigerator 1 may output the alarm at the time input by the user, or the refrigerator 1 may output the alarm when the time interval input by the user elapses. - Based on the refrigerator setting icon 223 being touched by the user, the main controller 110 may display an operation setting screen for inputting a setting value for controlling the operation of the refrigerator 1, on the display 120. For example, via the operation setting screen, the user can set the target temperature of each of the plurality of storage compartments 20a, 20b, and 20c contained in the refrigerator 1, and select between water and ice to be discharged through the dispenser 150. - Further, the plurality of launchers 230 for executing applications stored in the storage 130 may be displayed on the home screen 200. - For example, an album launcher 231 for executing an album application displaying pictures stored in the storage 130, a recipe launcher 232 for executing a recipe application providing food recipes, and a screen setting launcher 233 for executing a screen setting application controlling the operation of the display 120 may be displayed on the home screen 200. - In addition, a home appliance control launcher 234 for executing a home appliance control application controlling various home appliances through the refrigerator 1, a speech output setting launcher 235 for setting an operation of a speech output application outputting various contents in a speech, and an online shopping launcher 236 for executing a shopping application for online shopping may be displayed on the home screen 200. - As mentioned above, on the home screen 200 of the refrigerator 1, main information related to the operation of the refrigerator 1 and launchers for executing the variety of applications may be displayed. - However, the drawing shown in FIG. 5 is merely an example of the home screen 200, and thus the refrigerator 1 may display various types of home screens according to the user's settings. Further, the information and launchers displayed on the home screen are not limited to FIG. 5. -
FIG. 6 illustrates a speech recognition user interface (UI) displayed on the display 120 of the refrigerator according to one embodiment of the present disclosure. Hereinafter the main controller is referred to as a controller for convenience. - The controller 110 may display a speech recognition user interface (UI) 250 on the display 120 based on a predetermined wake word being input via the microphone 182. The speech recognition UI 250 having the configuration illustrated in FIG. 6 may be displayed on the display 120. The wake word may be composed of a predetermined word or a combination of words, and may be changed to an expression desired by the user. - The
controller 110 may execute a speech recognition function using a wake word uttered by the user, as described above. Alternatively, as illustrated inFIG. 7 , thecontroller 110 may execute a speech recognition function when a microphone-shapedbutton 271 displayed on a notification UI is touched. - The shape of the launcher indicating a button for executing the speech recognition function is not limited to the shape of the microphone and thus may be implemented in a variety of images.
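The wake-word behavior described above can be illustrated with a short sketch. This is a hypothetical example rather than the patent's implementation: the default wake word "hi fridge" and the idea of checking a recognized transcript are assumptions for illustration only.

```python
import string

def is_wake_word(transcript: str, wake_word: str = "hi fridge") -> bool:
    """Return True when the recognized transcript begins with the wake word.

    The wake word may be a single word or a combination of words, and may be
    changed to an expression desired by the user ("hi fridge" is only a
    hypothetical default).
    """
    # Normalize: lowercase and strip punctuation before comparing word lists.
    clean = transcript.lower().translate(str.maketrans("", "", string.punctuation))
    target = wake_word.lower().split()
    return clean.split()[:len(target)] == target
```

Under these assumptions, an utterance beginning with the wake word would activate the speech recognition UI 250, while ordinary speech would not.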
- The notification UI may be displayed on the
display 120 as illustrated inFIG. 7 , when a touch gesture is input. The touch gesture represents swiping upward while a specific point at the bottom of thedisplay 120 is touched. The launcher for executing the speech recognition function may be displayed on the notification UI or thehome screen 200 of thedisplay 120. - The
controller 110 may identify the user intention by recognizing and analyzing the user speech, which is composed of natural language, using the speech recognition function, and may perform a command according to the user intention. Further, by outputting the process or result of a command according to the user intention as a speech, the controller 110 may allow the user to acoustically recognize the process or the result of the speech command. - When the speech recognition function is activated, the speech recognition UI 250 may be displayed on the display 120, as illustrated in FIGS. 6 and 7. The speech recognition UI 250 may include a first area 252 on which a word or a sentence, which is output as a speech by using a text to speech (TTS) function, is displayed, a second area 254 on which a state of the speech recognition function is displayed, a third area 260 indicating that a speech is being output through the TTS function, a setting object 256 for setting the speech recognition function, and a help object 258 providing help for the use of the speech recognition function. - The second area 254, on which a state of the speech recognition function is displayed, may display, in a text manner, a state in which the user speech is being input, a state of waiting for the input of the user speech, or a state in which a process according to the input of the user speech is in progress. For example, the state in which the user speech is being input may be displayed as "listening", the waiting state may be displayed as "standby", and the state in which the process is in progress may be displayed as "processing". The above-mentioned English text is merely an example, and thus the state may be displayed in Korean text or may be displayed as an image rather than text. - As illustrated in FIGS. 6 and 7, the speech recognition UI 250 may be displayed as a card type UI that pops up on the home screen 200 or is displayed on an entire screen of the display 120. - Based on the user speech being input through the microphone 182, the controller 110 may transmit speech data to a first server SV1 described later, and execute a command by receiving information, which is acquired by analyzing the speech, from the first server SV1. -
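As described later with reference to FIG. 9, the controller digitizes the analog microphone signal by pulse code modulation before transmitting it to the first server SV1. The sketch below shows only the uniform quantization step of PCM; the 16-bit depth and the [-1.0, 1.0] sample range are assumptions, since the description does not fix them.

```python
def pcm_quantize(samples, bits=16):
    """Quantize analog sample values in [-1.0, 1.0] to signed integer codes,
    as in pulse code modulation (the bit depth is an assumed parameter)."""
    max_code = 2 ** (bits - 1) - 1      # 32767 for 16-bit PCM
    codes = []
    for s in samples:
        s = max(-1.0, min(1.0, s))      # clip to the valid analog range
        codes.append(round(s * max_code))  # uniform quantization
    return codes
```

A sampled waveform quantized this way becomes the digital speech data that the communication circuitry 140 can send to the first server SV1.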
FIG. 8 , therefrigerator 1 may be connected to an access point (AP) via thecommunication circuitry 140. Particularly, therefrigerator 1 may be connected to the access point (AP) by using a wireless communication standard such as Wi-Fi™ (IEEE 802.11), Bluetooth™ (IEEE 802.15.1) and Zigbee (IEEE 802.15.4). - The AP may be referred to as “hub”, “router”, “switch”, or “gateway” and the AP may be connected to a wide area network (WAN).
- As well as the
refrigerator 1, various electronic devices such as an air conditioner 2, a washing machine 3, an oven 4, a microwave oven 5, a robot cleaner 6, a security camera 7, a light 8 and a television 9 may be connected to the AP. The electronic devices 1-9 connected to the AP may form a local area network (LAN). - The AP may connect the LAN formed by the electronic devices 1-9 to a WAN such as the Internet.
- The first server SV1 providing information, which is acquired by analyzing speech data, to the refrigerator, and the second server SV2, which is operated by a provider providing information through an application installed in the refrigerator, may be connected to the WAN. For example, the second server SV2 may include a server (hereinafter referred to as “store server”) selling food online via a shopping application such as a store application provided in the refrigerator, a server (hereinafter referred to as “weather server”) providing information to a weather application, a server (hereinafter referred to as “recipe server”) providing information on recipes, to a recipe application, and a server (hereinafter referred to as “music server”) providing information on music, to a music application.
- Further, a mobile terminal (MT) may be connected to the WAN. The MT may be directly connected to the WAN or may be connected to the WAN through the AP according to the location of the MT. For example, when the MT is located close to the AP, the MT may be connected to the WAN through the AP. When the MT is located far from the AP, the MT may be directly connected to the WAN through a mobile communication service provided by a mobile communication service provider.
- Via the AP, the
refrigerator 1 may transmit data to the first server SV1 and/or the second server SV2, and receive data from the first server SV1 and/or the second server SV2. - For example, via the AP, the
refrigerator 1 may transmit data of the user speech input via the microphone 182 to the first server SV1, and the first server SV1 may transmit analysis information including the user intention, which is acquired by analyzing the speech data, to the refrigerator. - Via the AP, the refrigerator 1 may transmit the analyzed information of the user speech data to the second server SV2 and may receive information, which is food information related to a certain application such as the store application, from the second server SV2. - As described above, the refrigerator 1 may communicate with the first server SV1 and receive the analysis information of the user speech data via the first server SV1. The refrigerator 1 may communicate with the second server SV2 and receive the information related to the certain application via the second server SV2. Hereinafter the communication between the refrigerator and the server will be described in more detail with reference to FIGS. 9 and 10. - As illustrated in FIG. 9, the refrigerator 1 transmits speech data to the first server SV1 when a speech command uttered by the user is input (1000). Based on the speech data being received, the first server SV1 may derive the user intention by analyzing the speech data (1010). - Based on a user speech command being input via the microphone 182, the controller 110 of the refrigerator 1 converts the analog speech signal into speech data, that is, a digital signal, and transmits the speech data to the first server SV1 through the communication circuitry 140. - The controller 110 of the refrigerator 1 may convert an analog speech signal into a digital speech signal in a pulse code modulation method. For example, based on a speech command, such as "let me know today's weather", being input, the controller 110 of the refrigerator 1 may convert the speech command into a digital signal and transmit the digital signal to the first server SV1. - The
refrigerator 1, and configured to convert the speech data into a text by analyzing the speech data, and a natural language understanding (NLU) portion configured to identify a user intention based on the text, which is acquired by the conversion by the automatic speech recognition portion. Alternatively, the first server SV1 may include a learning network model that is trained using an artificial intelligence algorithm. In this case, the first server SV1 may identify (or recognize, estimate, infer, predict) the user intention by applying speech data transmitted from therefrigerator 1 to the learning network model. - For example, based on the speech data, the automatic speech recognition portion and the natural language understanding portion of the first server SV1 may identify the user intention as requiring information about weather in the area where the current user is located.
- The first server SV1 may transmit the analysis information of the speech data to the second server SV2 (1020), and the second server SV2 may transmit information, which is requested by the speech data according to the analysis information of the speech data, to the first server SV1 (1030). The first server SV1 may transmit the information, which is transmitted from the second server SV2, to the refrigerator 1 (1040).
- Based on the user intention contained in the speech data, the first server SV1 selects the second server SV2 providing information to an application related to the user intention, and the first server SV1 transmits the analysis information of the speech data to the selected second server SV2.
- For example, based on the user intention contained in the speech data, which is identified as the requirement for information on the weather of the area where the user is placed, the first server SV1 may transmit the analysis information of the speech data to the weather server providing information to the weather application among the second servers SV2.
- The second server SV2 may search for information matching with the user intention, based on the analysis information of the speech data, and transmit the searched information to the first server SV1. For example, the weather server may generate the current weather information of the area where the user is located, based on the analysis information of the speech data, and transmit the generated weather information to the first server SV1. The first server SV1 converts the information transmitted from the second server SV2 into a JavaScript Object Notation (JSON) file format and transmits the JSON file format to the
refrigerator 1. - The
refrigerator 1 displays the information transmitted from the first server SV1 on the display 120 as a card type user interface (UI), and outputs the information in a speech (1050). - Based on the information in the JSON file format, which is transmitted from the first server SV1 and received by the communication circuitry 140 of the refrigerator 1, the controller 110 of the refrigerator 1 may display the received information on the display 120 as the card type UI, and output the received information in a speech by using the TTS function. - For example, based on the information in the JSON file format, which is about the current weather of the user's location and transmitted from the first server SV1, the controller 110 of the refrigerator 1 may display information including the user location, time and weather, on the display 120 as the card type UI. The controller 110 may output the weather information displayed in the card type UI as a speech such as "Today's current weather in ** street is 20 degrees Celsius, humidity is 50 percent and clear without clouds". - Meanwhile, referring to FIG. 10, the refrigerator 1 transmits the speech data to the first server SV1 based on the speech command uttered by the user (1100). Based on the speech data being received, the first server SV1 may derive the user intention by analyzing the speech data (1110). - Based on a user speech command being input via the microphone 182, the controller 110 of the refrigerator 1 converts an analog speech signal into speech data, that is, a digital signal, and transmits the speech data to the first server SV1 through the communication circuitry 140. - The controller 110 of the refrigerator 1 may convert an analog speech signal into a digital speech signal in a pulse code modulation method. For example, based on a speech command, such as "play next song", being input, the controller 110 of the refrigerator 1 may convert the speech command into a digital signal and transmit the digital signal to the first server SV1. In addition, based on a speech command, such as "let me know apple pie recipe", being input, the controller 110 of the refrigerator 1 may convert the speech command into a digital signal and transmit the digital signal to the first server SV1. - The
refrigerator 1, and configured to convert the speech data into a text by analyzing the speech data, and a natural language understanding (NLU) portion configured to identify a user intention based on the text, which is acquired by the conversion by the automatic speech recognition portion. Alternatively, the first server SV1 may include a learning network model that is trained using an artificial intelligence algorithm. In this case, the first server SV1 may identify (or recognize, estimate, infer, predict) the user intention by applying speech data transmitted from therefrigerator 1 to the learning network model. - For example, based on the speech data, the automatic speech recognition portion and the natural language understanding portion of the first server SV1 may identify the user intention as asking to play the next song to the song currently being played. In addition, based on the speech data, the automatic speech recognition portion and the natural language understanding portion of the first server SV1 may identify the user intention as asking an apple pie recipe.
- The first server SV1 may transmit the analysis information of the speech data to the refrigerator 1 (1120), and the
refrigerator 1 may transmit the analysis information of the speech data to the second server SV2 (1130). The second server SV2 may transmit information, which is requested by the speech data according to the analysis information of the speech data, to the refrigerator 1 (1140). - When the user intention is identified by analyzing the speech data, the first server SV1 converts the analysis information of the user speech data into a JSON file format, and transmits the JONS file format to the
refrigerator 1. - When the
communication circuitry 140 of the refrigerator 1 receives the analysis information in the JSON file format transmitted from the first server SV1, the controller 110 of the refrigerator 1 may select an application based on the analysis information and output the analysis information to the selected application. - For example, when the user intention is identified as a request for playing the next song to the currently played song, based on the analysis information in the JSON file format, the controller 110 of the refrigerator 1 may select the music application as an application capable of performing a function related to the user intention. The controller 110 may transmit a command for requesting to play the next song to the selected music application. - For example, when the user intention is identified as a request for an apple pie recipe, based on the analysis information in the JSON file format, the controller 110 of the refrigerator 1 may select the recipe application as an application capable of performing a function related to the user intention. The controller 110 may transmit a command for requesting information on the apple pie recipe to the selected recipe application. - Based on the analysis information being output to the selected application, the selected application may transmit the analysis information to the second server SV2 providing information. The second server SV2 may search for information meeting the user intention based on the analysis information of the speech data and transmit the found information to the
refrigerator 1. - For example, upon receiving the analysis information output from the
controller 110, the music application may request information on the next song to the second server SV2 providing information to the music application, which is the music server. - The music server may select the next song to the currently played song based on the analysis information of the speech data, and provide information on the selected next song, to the music application.
- In addition, upon receiving the analysis information being output from the
controller 110, the recipe application may request information on the apple pie recipe from the second server SV2, which is configured to provide information to the recipe application and corresponds to the recipe server. The recipe server may provide a pre-stored apple pie recipe to the recipe application based on the analysis information of the speech data, or search for another apple pie recipe and provide the found recipe to the recipe application. - Based on the information transmitted from the second server SV2 being received, the
refrigerator 1 may execute the related application according to the received information (1150). - When the
communication circuitry 140 of the refrigerator 1 receives the information transmitted from the second server SV2, the controller 110 may execute the selected application to provide information corresponding to the user speech according to the information transmitted from the second server SV2. - For example, the controller 110 executes the music application upon receiving the information on the next song transmitted from the music server. The music application may stop playing the current song and play the next song according to the information on the next song transmitted from the music server. A music application UI may be displayed on the display 120 of the refrigerator 1, and thus the information on the playing of the next song may be displayed. The output of the currently played song may be stopped, and the next song may be output through the speaker 181 of the refrigerator 1. - In addition, the controller 110 executes the recipe application upon receiving the information on the apple pie recipe transmitted from the recipe server. A recipe application UI may be displayed on the display 120 of the refrigerator 1, and thus the apple pie recipe may be displayed. A speech reading the apple pie recipe may be output through the speaker 181 of the refrigerator 1. - As illustrated in FIG. 9, the information on the user speech may be received via the communication between the first server SV1 and the second server SV2. As illustrated in FIG. 10, the information on the user speech may be received via the communication between the refrigerator 1 and the second server SV2. -
FIGS. 11A to 14 are views illustrating a method for a user to purchase food through the refrigerator according to one embodiment of the present disclosure. - As illustrated in
FIG. 11A, a speech for ordering a certain food together with the wake word may be input via the microphone 182 (900). Based on the wake word being recognized, the controller 110 may analyze the user speech by executing the speech recognition function, receive a food list 300 for the food contained in the user speech from the related application (e.g., the store application), and display the food list in the card type UI on the display 120 (901). - The refrigerator 1 may analyze the user speech through the communication with the first server SV1 by the method illustrated in FIG. 9, so as to identify the user intention, and receive information, which is related to the user intention (e.g., a food list) and transmitted from the second server SV2, via the first server SV1. The received information may be used as an input value of an application for displaying a food list, and thus the food list 300 may be displayed in the card type UI as illustrated in FIG. 11A. For example, the result of the analysis of the user speech may be a food name. The food name may include not only the exact name of the food but also an idiom or slang referring to the food, a part of the food name, and a name similar to the food name. Further, the food name may also be an alias that is uttered or registered by the user of the refrigerator 1. - The
food list 300 may be displayed in the card type UI at the bottom of thespeech recognition UI 250. Thefood list 300 may include arepresentative image 314 representing the food,food information 316 including the manufacturer, the name of the food, the amount, the quantity and the price, and anidentification mark 312, which is configured to distinguish the corresponding food from other food and is displayed in figures. - Since a mark capable of distinguishing a plurality of food from each other is sufficient to be the identification mark, the identification mark may be displayed in figures or characters.
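The food list entries described above, each pairing food information with an identification mark displayed in figures, can be sketched as a simple data-building step. The dictionary fields below are assumptions; the description names only a representative image, food information, and an identification mark.

```python
def build_food_list(foods):
    """Attach an identification mark (1, 2, 3, ...) to each food entry so
    that items can be told apart, e.g. when the user selects one by speech.
    The field names are hypothetical."""
    return [
        {"identification_mark": i, **food}
        for i, food in enumerate(foods, start=1)
    ]

food_list = build_food_list([
    {"name": "milk", "manufacturer": "A Dairy", "price": 2.50},
    {"name": "apple pie", "manufacturer": "B Bakery", "price": 7.00},
])
```

Each entry now carries a figure that the UI can print next to the representative image, and a spoken selection such as "number 2" can be resolved back to the matching entry.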
- A
tab 318 may be displayed at the bottom of thefood list 300, and thetab 318 may be selected when the user wants to find more food than the food contained in thefood list 300. Thetab 318 may be displayed in a text “see more food at store*” indicating the function of thetab 318. When the user touches thetab 318, the store application providing the food information may be executed and thus the food information may be displayed on thedisplay 120. - Through the
speaker 181, thecontroller 110 may output thefood list 300 and a speech requesting confirmation about the selection of the food among the food contained in thefood list 300. On thethird area 260 of thespeech recognition UI 250, thecontroller 110 may display a microphone-shaped image indicating that the speech is outputting through thespeaker 181. Further, the speech output through thespeaker 181 may be displayed in the text on thefirst area 250 of the speech recognition UI. - In various embodiments, the
controller 110 may recognize a first speech including the food name via the microphone 182. For example, the controller 110 may convert the input first speech into a digital speech signal. The controller 110 may recognize the digital speech signal through the learning network model, which is trained using the artificial intelligence algorithm, thereby acquiring the food name corresponding to the first speech. The food name contained in the first speech and the food name acquired by the recognition may be the same or different from each other. For example, the food name contained in the first speech may be an idiomatic word, an alias, or a part of the food name, but the food name acquired by the recognition may be the full name, the sale name, or the trademark name. - The learning network model recognizing the first speech may be stored in the
storage 130, or in the first server SV1, which receives and recognizes the first speech data. - When the learning network model is stored in the first server SV1, the
controller 110 may transmit the first speech, which is converted into the digital speech signal, to the first server SV1. The first server SV1 may recognize (or estimate, infer, predict, identify) the food name corresponding to the first speech by applying the first speech as an input value to the learning network model that is trained using the artificial intelligence algorithm. Based on the result of the recognition, the first server SV1 may transmit the food name corresponding to the first speech to the controller 110. Alternatively, when the learning network model is stored in the storage 130, the controller 110 may recognize the food name corresponding to the first speech by applying the first speech, which is converted into the digital speech signal, to the learning network model stored in the storage 130. - When the food name is acquired by using the learning network model, the
controller 110 may display the food list 300, which includes food information 316 about the acquired food name and the identification mark 312 for distinguishing the food from other foods, on the display 120. - Next, through the
microphone 182, the controller 110 may recognize a second speech referring to the identification mark 312 of the food list. The controller 110 may convert the input second speech into a digital speech signal. The controller 110 may acquire an identification mark corresponding to the second speech by recognizing the digital speech signal through the learning network model that is trained using the artificial intelligence algorithm. The learning network model recognizing the second speech may be stored in the storage 130 or in the first server SV1 analyzing speech data. - When the learning network model is stored in the first server SV1, the
controller 110 may transmit the second speech, which is converted into the digital speech signal, to the first server SV1. The first server SV1 may recognize (or estimate, infer, predict, identify) the identification mark corresponding to the second speech by applying the second speech as an input value to the learning network model that is trained using the artificial intelligence algorithm. Based on the result of the recognition, the first server SV1 may transmit the identification mark corresponding to the second speech to the controller 110. Alternatively, when the learning network model is stored in the storage 130, the controller 110 may recognize the identification mark corresponding to the second speech by applying the second speech, which is converted into the digital speech signal, to the learning network model stored in the storage 130. - When the identification mark is recognized through the learning network model, the
controller 110 may display at least one piece of food purchase information represented by the identification mark 312 on the display 120. -
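The dual recognition path described above — a learning network model held locally in the storage 130 or remotely on the first server SV1 — can be sketched as follows. This is an illustrative assumption, not the disclosed implementation: the class and function names are invented, and the stub "model" merely resolves a few aliases to stand in for a trained network.

```python
# Hypothetical sketch of local-vs-server recognition of the first speech.
class LocalModel:
    """Stand-in for a learning network model stored in the storage 130."""
    def infer(self, speech_signal):
        # A trained model would map the digitized speech to a canonical
        # food name; this stub resolves a couple of example aliases.
        aliases = {"bubbly water": "sparkling water", "pop": "soda"}
        return aliases.get(speech_signal, speech_signal)

class ServerClient:
    """Stand-in for the first server SV1 reachable over the network."""
    def __init__(self, model):
        self.model = model
    def recognize(self, speech_signal):
        return self.model.infer(speech_signal)

def recognize_food_name(speech_signal, local_model=None, server=None):
    """Prefer the locally stored model; otherwise fall back to SV1."""
    if local_model is not None:
        return local_model.infer(speech_signal)
    if server is not None:
        return server.recognize(speech_signal)
    raise RuntimeError("no learning network model available")

assert recognize_food_name("bubbly water", local_model=LocalModel()) == "sparkling water"
assert recognize_food_name("pop", server=ServerClient(LocalModel())) == "soda"
```

Either path returns the same canonical food name, which is why the embodiments treat the storage location of the model as interchangeable.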
FIG. 11B is a view illustrating a method for a user to purchase food through a refrigerator according to another embodiment of the present disclosure. - As illustrated in
FIG. 11B , a speech for ordering a certain food may be input via a microphone 182 (950). - A
controller 110 may recognize a first speech including a food name. As described above, the controller 110 may recognize the first speech by applying the first speech, which is input through the microphone 182, to the learning network model that is trained using the artificial intelligence algorithm. - Based on the first speech being recognized, the
controller 110 may identify whether a food 951-1, which is related to the food name corresponding to the first speech, is placed in the storage compartment 20 (951). A food related to the food name may be a food having the food name as at least a part of its name, a food having a similar name, or a food substitutable for the food. - For example, the
refrigerator 1 may store a storage information list, which includes the names of the foods placed in the storage compartment 20, in the storage 130. For example, a food name on the storage information list may be generated by user input when the user stores the food in the storage compartment 20 of the refrigerator 1. Particularly, information input by the user as speech or text may be stored as the food name. Alternatively, when the user tags identification information of the food (e.g., a bar code), the food name contained in the identification information may be stored as the food name. Alternatively, the food names may be generated by recognizing an image of the storage compartment 20 captured by the camera installed in the refrigerator 1. For example, the refrigerator 1 may recognize the food name by applying the image, which is captured by the camera, to the learning network model, which is trained using the artificial intelligence algorithm, and store the recognized food name. - The
refrigerator 1 may search for whether the food name corresponding to the first speech is on the storage information list, and identify whether a food related to the food name is placed in the storage compartment 20. - When it is identified that the food 951-1 related to the food name is placed in the
storage compartment 20 as a result of identifying whether the food 951-1 related to the food name corresponding to the first speech is placed in the storage compartment 20, the controller 110 may allow the display 120 to display information indicating the presence of the food 951-1 (952). The information indicating the presence of the food may include at least one of a video or image 952-1, which is acquired by capturing the food, a notification text 952-2 (e.g., “food similar to what you are looking for is in the refrigerator”) indicating the presence of the food, or a notification sound indicating the presence of the food. - On the other hand, the user may want to order additional food. For example, on the
display 120 on which the information indicating the presence of the food 951-1 is displayed, a UI (not shown) for ordering additional food may be displayed together. In response to the user input for selecting the UI (not shown), the controller 110 may allow the display 120 to display the food list 300, as illustrated in FIG. 11A. -
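The storage-compartment check described in this embodiment can be sketched as follows. The matching rule (case-insensitive substring containment in either direction) is an assumption for illustration only; the disclosure leaves the notion of a "related" food open to similar or substitutable names as well.

```python
# Hypothetical sketch: match the recognized food name against the storage
# information list to decide whether a related food is already placed in
# the storage compartment 20.
def find_related_foods(food_name, storage_list):
    """Return stored foods whose name contains, or is contained in,
    the recognized food name (case-insensitive)."""
    query = food_name.lower()
    return [item for item in storage_list
            if query in item.lower() or item.lower() in query]

storage_list = ["apple pie", "sparkling water", "milk"]
assert find_related_foods("water", storage_list) == ["sparkling water"]
assert find_related_foods("apple pie slice", storage_list) == ["apple pie"]
assert find_related_foods("cheese", storage_list) == []
```

A non-empty result would trigger the presence notification (952); an empty result leads to the purchase list of the FIG. 11C embodiment.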
FIG. 11C is a view illustrating a method for a user to purchase food through a refrigerator according to yet another embodiment of the present disclosure. - As illustrated in
FIG. 11C , a speech for ordering a certain food may be input via a microphone 182 (960). - A
controller 110 may recognize a first speech including a food name. As described above, the controller 110 may recognize the first speech by applying the first speech, which is input through the microphone 182, to the learning network model that is trained using the artificial intelligence algorithm. - Based on the first speech being recognized, the
controller 110 may identify whether a food, which is related to the food name corresponding to the first speech, is placed in a storage compartment 20 (961). For example, the controller 110 may identify whether a food related to the food name is placed in the storage compartment 20, based on an image of the storage compartment 20 captured by the camera installed in the refrigerator 1. Alternatively, a storage information list about the foods placed in the storage compartment 20 may be stored in the storage 130 of the refrigerator 1. In this case, the refrigerator 1 may search for whether the food name is on the storage information list, and identify whether the food related to the food name is placed in the storage compartment 20. - When it is identified that the food related to the food name is not placed in the
storage compartment 20 as a result of identifying whether the food related to the food name corresponding to the first speech is placed in the storage compartment 20, the controller 110 may display a food list 962-1, including food information having a name corresponding to the food name and a mark for identifying the food information, on the display 120 in order to purchase the food (962). The food list 962-1 may be displayed in the card type UI at the bottom of the speech recognition UI. -
FIG. 11D is a view illustrating a method for a user to purchase food through a refrigerator according to yet another embodiment of the present disclosure. - As illustrated in
FIG. 11D , a speech for ordering a certain food may be input via a microphone 182 (970). - The
controller 110 may recognize a first speech including a food name. As described above, the controller 110 may recognize the first speech by applying the first speech, which is input through the microphone 182, to the learning network model that is trained using the artificial intelligence algorithm. In addition, the controller 110 may recognize a user uttering the first speech by applying the first speech, which is input through the microphone 182, to the learning network model that is trained using the artificial intelligence algorithm, and acquire user information on the user uttering the first speech. - For example, when the learning network model is stored in the first server SV1, the
controller 110 may transmit the first speech, which is converted into the digital speech signal, to the first server SV1. The first server SV1 may recognize (or estimate, infer, predict, identify) the user uttering the first speech by applying the first speech as an input value to the learning network model that is trained using the artificial intelligence algorithm. The first server SV1 may transmit the user information on the user uttering the first speech to the controller 110. - Alternatively, when the learning network model is stored in the
storage 130, the controller 110 may recognize the user uttering the first speech by applying the first speech, which is converted into the digital speech signal, to the learning network model stored in the storage 130. - When the food name and the user information are acquired based on the result of the learning network model, the
controller 110 may acquire information on the food, related to the food name, that is preferred by the user uttering the first speech (971). For example, when a large number of users use the refrigerator 1, each user may have their own preferred food depending on the manufacturer, the type, the capacity, and the place where the food is sold, even when the foods have the same name. Accordingly, the storage 130 of the refrigerator 1 may store a user-specific food information list 971-1 in which information on the preferred food for each user is registered with respect to foods of the same name. The user-specific food information may be determined based on each user's purchase history, an input history directly registered by the user, and a history of the food information corresponding to the identification marks previously selected for the food name. In addition, the user-specific food information list may store the food information according to the priority preferred by each user. For example, when a higher priority is given to a larger purchase history, the food information may be stored with priorities assigned in order of the user's purchase history. - The
controller 110 may acquire the food information preferred by a specific user based on the food information list stored in the storage 130 (971). Alternatively, the controller 110 may acquire the food information preferred by a specific user by applying the food name and the user information to the learning network model that is trained using the artificial intelligence algorithm. In this case, the learning network model may be a learning network model that is trained by using the above-mentioned purchase history, input history, and food information history corresponding to the selected identification mark. - When the food name and the food information, which is preferred by the user uttering the first speech, are acquired, the
controller 110 may display a food list 972-1, including the information on the food preferred by the user and an identification mark for distinguishing the food from other foods preferred by the user, on the display 120. On the food list 972-1, the information on the foods preferred by the user may be arranged in order of the user preference. - As illustrated in
FIG. 12, the user can utter a speech for selecting a certain food in response to a speech requesting confirmation for selecting a food among the foods contained in the food list (902). For example, the user can utter an identification mark for distinguishing foods, as a speech. - In order to select the first food on the food list, the user may select the identification mark by uttering “
number 1” or “first thing” (903 and 904). - The
controller 110 may analyze the user speech through the speech recognition function. As illustrated in FIG. 10, the controller 110 may transmit the user speech data to the first server SV1 and receive the analyzed information of the speech data from the first server SV1, thereby identifying the user intention. Alternatively, the controller 110 of the refrigerator 1 itself may analyze the user speech using the speech recognition function and identify the user intention. - The
controller 110 may search for the food name corresponding to the identification mark uttered by the user (905), and display the found food. As illustrated in FIG. 13, the controller 110 may execute the store application and display a UI including purchase information for purchasing the selected food, on the display 120 (906). - The
controller 110 may output the analyzed information of the user speech data to the store application through the method illustrated in FIG. 11 and, when receiving the food information from the second server SV2, the controller 110 may display the UI on the display 120 by executing the store application, as illustrated in FIG. 13. - Referring to
FIG. 13, a food confirmation UI 350 indicating specific information on a food selected by the user may be displayed. The food confirmation UI may include a representative image region 354 representing the food, a food information area 356 including a manufacturer, a food name, a capacity, a quantity and a price, a quantity control area 358 for controlling the quantity of the food, and a food purchase area 360 including a cart tab for putting the food in the cart, a buy now tab for immediately purchasing the food, a gift tab for presenting the food, and a tab for bookmarking the selected food. The number and arrangement of the areas contained in the food purchase UI, as illustrated in FIG. 13, are merely an example and may vary in number, arrangement, or content. - As illustrated in
FIG. 14, the user may say “put it to the cart” to put the food, which is displayed on the food confirmation UI, in the cart (907). The user may say “buy it” to immediately purchase the food, which is displayed on the food confirmation UI (912). - When the function of putting the food in the cart is performed normally in response to the user utterance “put it to the cart” (yes in 908), the
controller 110 may display the speech recognition UI on the display 120 and display a text “add the selected food to the cart” on the speech recognition UI, as illustrated in FIG. 14 (909). In addition, the controller 110 may output the text, which is displayed on the speech recognition UI, via the speaker 181 as a speech using the TTS function. The speech recognition UI may be displayed in the card type UI on the store application screen. - When the text and the speech indicating that the food is added normally to the cart are output, the store application may display a cart screen on the display 120 (911).
- When an error occurs in the function of putting the food in the cart in response to the user utterance “put it to the cart” (no in 908), the
controller 110 may display the speech recognition UI on the display 120 and display a text “an error occurs in adding the food to the cart. Please check the contents and try again” on the speech recognition UI, as illustrated in FIG. 14 (910). In addition, via the speaker 181, the controller 110 may output the text, which is displayed on the speech recognition UI, as a speech using the TTS function. In this case, the user may say “put it to the cart” again to add the food to the cart. - When the function of purchasing food immediately in response to the user's utterance “buy it” is performed normally (yes in 913), the
controller 110 may display the speech recognition UI on the display 120 and display a text “go to the purchase page of the selected food” on the speech recognition UI (914). In addition, via the speaker 181, the controller 110 may output the text, which is displayed on the speech recognition UI, as a speech using the TTS function. The speech recognition UI may be displayed in the card type UI on the store application screen. - When the text and the speech indicating the move to the food purchase page are output because the immediate purchase function is performed normally, the store application may display the food purchase screen on the display 120 (915).
- When an error occurs in the function of immediately purchasing the food in response to the user utterance “buy it” (no in 913), the
controller 110 may display the speech recognition UI on the display 120 and display a text “an error occurs in going to the purchase page. Please check the contents and try again” on the speech recognition UI, as illustrated in FIG. 14 (916). In addition, via the speaker 181, the controller 110 may output the text, which is displayed on the speech recognition UI, as a speech using the TTS function. In this case, the user may say “buy it” again to purchase the food. - Meanwhile, as illustrated in
FIG. 12, the user may say “cancel” when the user does not want to select a food contained in the food list or wants to cancel the food purchase process (917). - In response to the user utterance “cancel”, the
controller 110 may display the speech recognition UI on the display 120 and display a text “food search canceled” on the speech recognition UI (918). In addition, via the speaker 181, the controller 110 may output the text, which is displayed on the speech recognition UI, as a speech using the TTS function. The speech recognition UI may be displayed in the card type UI on the store application screen. - Further, as illustrated in
FIG. 12, the user can say the food name instead of the identification mark (919). When foods having the same name are present in the food list or when the user says a wrong food name, the food selected by the user may not be identified. In this case, as illustrated in FIG. 12, in response to the user utterance, the controller 110 may display the speech recognition UI on the display 120 and display a text “please tell the number” on the speech recognition UI (920). In addition, via the speaker 181, the controller 110 may output the text, which is displayed on the speech recognition UI, as a speech using the TTS function. - As described above, in response to the user utterance, the
controller 110 may not immediately execute the application, but may first provide information in the card type UI, and when a certain piece of the information provided in the card type UI is selected, the controller 110 may execute the related application. -
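The cart flow of FIGS. 13 and 14 can be sketched as follows. This is a hedged illustration, not the disclosed implementation: the cart object, function name, and failure-simulation class are assumptions; only the two returned messages echo the UI texts quoted above.

```python
# Hypothetical sketch: attempt the "put it to the cart" action and return
# (success, message), where the message is what would be shown on the
# speech recognition UI and spoken via the TTS function.
def add_to_cart(cart, food):
    try:
        cart.append(food)  # stand-in for the store application's cart call
        return True, "add the selected food to the cart"
    except Exception:
        return False, ("an error occurs in adding the food to the cart. "
                       "Please check the contents and try again")

class FailingCart(list):
    """Assumed stand-in for a cart whose backing service is unavailable."""
    def append(self, item):
        raise RuntimeError("cart service unavailable")

cart = []
ok, msg = add_to_cart(cart, "BrandA apple pie")
assert ok and cart == ["BrandA apple pie"]

ok, msg = add_to_cart(FailingCart(), "milk")
assert not ok  # error text would be displayed and spoken; user may retry
```

The "buy it" path (914 to 916) would follow the same success/error shape, swapping in the purchase-page messages.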
FIGS. 15 and 16 are views illustrating a method in which the refrigerator according to one embodiment of the present disclosure performs a command in accordance with a speech command. - When the user utters a speech about a specified target for executing a speech command, the
controller 110 of the refrigerator 1 may perform a command corresponding to the speech command, as illustrated in FIG. 15. - As illustrated in
FIG. 15, when a speech “stop the music” is input via the microphone 182 (1300), the controller 110 of the refrigerator 1 may analyze the user speech through the speech recognition function, thereby identifying the user intention. The controller 110 of the refrigerator 1 itself may analyze the user speech through the speech recognition function, or alternatively may transmit the user speech data to the first server SV1 and receive the analyzed information of the speech data from the first server SV1. - Based on the user intention being identified as “stopping the music”, the
controller 110 of the refrigerator 1 may identify whether the music application is executed, that is, whether music is being played (1301). When the music is being played, the controller 110 may immediately stop playing the music (1302). - When the music is not being played, the
controller 110 may display the speech recognition UI on the display 120 and display a text “music is not playing” on the speech recognition UI (1303). In addition, via the speaker 181, the controller 110 may output the text, which is displayed on the speech recognition UI, as a speech using the TTS function. - A speech command in which the target is not specified may be input via the
microphone 182, unlike the example of FIG. 15. That is, as illustrated in FIG. 16, a speech “stop”, in which the target of the command is omitted, may be input via the microphone 182 (1300). - When the speech “stop” without the target of the command is input via the
microphone 182, the controller 110 of the refrigerator 1 may analyze the user speech through the speech recognition function, thereby identifying the user intention. The controller 110 of the refrigerator 1 itself may analyze the user speech through the speech recognition function, or alternatively may transmit the user speech data to the first server SV1 and receive the analyzed information of the speech data from the first server SV1. - When the target of the command is not contained in the speech input via the
microphone 182, the controller 110 may identify whether it is capable of performing the speech command on the currently executed application displayed on the display 120 (1310). - When the
controller 110 is capable of performing the speech command on the currently executed application displayed on the display 120, the controller 110 may perform the speech command on the application displayed on the display 120 (1320). - For example, when the recipe application is currently displayed on the
display 120 and the speech reading the recipe is being output via the speaker 181, the controller 110 may immediately stop the function in which the recipe application reads the recipe. - In other words, when a speech command without the target of the command is input, the
controller 110 may identify whether to perform the command on the currently executed application displayed on the display 120, as the first priority. - When the
controller 110 is not capable of performing the speech command on the currently executed application displayed on the display 120, the controller 110 may display the speech recognition UI on the display 120, thereby notifying that it is impossible to perform the speech command (1330). - That is, as illustrated in
FIG. 16, the controller 110 may display the speech recognition UI on the display 120 and display a text “not reading the recipe” on the speech recognition UI. In addition, via the speaker 181, the controller 110 may output the text, which is displayed on the speech recognition UI, as a speech using the TTS function. - When the
controller 110 notifies, through the speech recognition UI, that it is impossible to perform the speech command on the application displayed on the display 120, the controller 110 may identify whether a predetermined priority for the applications related to the input speech command is present (1340). When the predetermined priority is present, the controller 110 may perform the speech command on an application according to the priority (1350). - For example, according to the priority, the
controller 110 may perform the speech command on the application ranked with the highest priority, and when it is impossible to perform the speech command on the application having the highest priority, the controller 110 may perform the speech command on the application ranked with the next highest priority. - When the music application is ranked with the highest priority in the priority order of the applications related to the speech command “stop”, the
controller 110 stops the music being played by the music application. When the music is not playing, the controller 110 may identify whether to perform the speech command on the application ranked with the next highest priority. Accordingly, the controller 110 may perform the speech command on the applications in order of priority. - When the predetermined priority for the applications related to the input speech command is not present, the
controller 110 may request a confirmation about whether to perform the speech command on another application (1360). - As described above, when the function of reading the recipe is not executed although the recipe application is displayed on the
display 120, the controller 110 may identify whether to perform the speech command on another application. - When the radio application is currently executed in the background, the
controller 110 may select the radio application as the target of the speech command, as illustrated in FIG. 16, and before executing the speech command on the radio application, the controller 110 may display the speech recognition UI and request confirmation about whether to execute the speech command. - That is, as illustrated in
FIG. 16, the controller 110 may display the speech recognition UI on the display 120 and display a text “do you want to stop the radio” on the speech recognition UI. In addition, via the speaker 181, the controller 110 may output the text, which is displayed on the speech recognition UI, as a speech using the TTS function. - When an answer to the confirmation request is input via the
microphone 182, the controller 110 performs a command according to the input answer (1370). - For example, when a positive answer (e.g., “yes”) to the confirmation request “do you want to stop the radio” is input, the
controller 110 stops the execution of the radio application. - As described above, although the target of the command is omitted in the speech command, the
refrigerator 1 according to one embodiment may perform the command meeting the user intention using the method illustrated in FIG. 16. -
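The dispatch logic of FIGS. 15 and 16 — foreground application first, then the predetermined priority list — can be sketched as follows. This is an illustrative assumption, not the disclosed implementation: the `App` class and the `can_handle`/`handle` interface are invented stand-ins for the applications discussed above.

```python
# Hypothetical sketch of handling a target-less speech command such as
# "stop": try the currently displayed application first, then fall back
# to applications in predetermined priority order.
def dispatch_command(command, foreground_app, priority_list):
    """Return the name of the application that handled `command`,
    or None when none could (the controller would then request
    confirmation, e.g. "do you want to stop the radio")."""
    if foreground_app.can_handle(command):
        foreground_app.handle(command)
        return foreground_app.name
    for app in priority_list:  # highest priority first
        if app.can_handle(command):
            app.handle(command)
            return app.name
    return None

class App:
    def __init__(self, name, active):
        self.name, self.active = name, active
    def can_handle(self, command):
        return command == "stop" and self.active
    def handle(self, command):
        self.active = False

recipe = App("recipe", active=False)  # displayed, but not reading aloud
music = App("music", active=False)    # not playing
radio = App("radio", active=True)     # playing in the background
assert dispatch_command("stop", recipe, [music, radio]) == "radio"
assert radio.active is False          # the radio has been stopped
```

When the loop finds no active application, the `None` result corresponds to step 1360, where the controller asks the user for confirmation instead of acting.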
FIGS. 17 to 20 are views illustrating a method in which the refrigerator according to one embodiment of the present disclosure identifies a user and provides information corresponding to the identified user. - As illustrated in
FIGS. 17 to 19 , the user can refer to a specific user, and the user may ask to provide information related to the specific user. - As illustrated in
FIG. 17, when a speech “show mother's this month calendar” is input via the microphone 182 (2000), the controller 110 of the refrigerator 1 may analyze the user speech through the speech recognition function, thereby identifying the user intention. - The
controller 110 of the refrigerator 1 may analyze the user speech through the speech recognition function. Alternatively, the controller 110 of the refrigerator 1 may transmit the speech data to the first server SV1 and receive the analyzed information of the speech data from the first server SV1. - The calendar application has been described as an application capable of providing user-specific information, but the application is not limited to the calendar application. Therefore, the speech output setting application (which is referred to as “morning brief” in
FIG. 5) capable of outputting contents, which are specified for the user, as a speech, may provide user-specific information according to the methods illustrated in FIGS. 17 to 20. - The
controller 110 of the refrigerator 1 may identify the user intention contained in the speech by recognizing the user name contained in the speech, and perform the command according to the speech. That is, when it is identified that the user intention is to show the calendar of the mother among a plurality of users, particularly this month's calendar among the mother's calendars, the controller 110 displays the mother's this month calendar 290 on the display 120 by executing the calendar application, as illustrated in FIG. 18 (2030). - As illustrated in
FIG. 18, a calendar provided by the calendar application may include a calendar part 291 on which the date and day of the week are displayed, and a schedule part 292 on which today's schedule is displayed. The configuration of the calendar illustrated in FIG. 18 is merely an example, and thus the configuration and arrangement of the calendar may have a variety of shapes. - When the
controller 110 of the refrigerator 1 fails to recognize the name of a specific user contained in the speech, the controller 110 performs a process of selecting a specific user among the stored users (2020). - When the
controller 110 of the refrigerator 1 fails to recognize the name of the specific user contained in the speech, the controller 110 of the refrigerator 1 may display the speech recognition UI on the display 120 and then display a text “please select a profile” on the speech recognition UI (2021). In addition, via the speaker 181, the controller 110 may output the text, which is displayed on the speech recognition UI, as a speech using the TTS function. The controller 110 may display a user profile list, which is pre-registered in the refrigerator 1, together with the speech recognition UI. - The
FIG. 19 , the identification mark may be displayed in figures, but is not limited thereto. Because a mark capable of distinguishing a plurality of users from each other is sufficient to be the identification mark, the identification mark may be displayed in figures or characters. - As illustrated in
FIG. 19, the user can utter a speech for selecting a specific user in response to a speech requesting confirmation for selecting a user among the users contained in the profile list (2022). For example, the user can utter an identification mark as a speech. - In order to select the second user on the profile list, the user may select the identification mark by uttering “two” or “second” (2023 and 2024).
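The resolution of such an uttered identification mark to a profile entry can be sketched as follows. This is a hedged illustration only: the utterance-to-index table is an assumed simplification of what the speech recognition function (or the first server SV1) would return, and the profile names are invented.

```python
# Hypothetical sketch: map an uttered identification mark ("two",
# "second", ...) to the corresponding profile on the registered list.
ORDINALS = {"one": 1, "first": 1, "two": 2, "second": 2,
            "three": 3, "third": 3}

def select_profile(utterance, profiles):
    """Return the profile matching the uttered identification mark, or
    None when the utterance cannot be resolved (the controller would
    then display "say the profile number")."""
    index = ORDINALS.get(utterance.strip().lower())
    if index is None or not (1 <= index <= len(profiles)):
        return None
    return profiles[index - 1]

profiles = ["father", "mother", "daughter"]
assert select_profile("two", profiles) == "mother"
assert select_profile("second", profiles) == "mother"
assert select_profile("mom", profiles) is None
```

The `None` branch corresponds to the recovery path described below, where repeated unresolvable answers eventually end with “it is impossible to get accurate information”.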
- The
controller 110 may analyze the user speech through the speech recognition function. As illustrated in FIG. 10, the controller 110 may transmit the user speech data to the first server SV1 and receive the analyzed information of the speech data from the first server SV1, thereby identifying the user intention. Alternatively, the controller 110 of the refrigerator 1 itself may analyze the user speech using the speech recognition function and identify the user intention. - The
controller 110 may select a user name corresponding to the identification mark uttered by the user, and display the selected user's calendar on the display 120. That is, the controller 110 selects the user corresponding to “two” or “second” uttered by the user, that is, the mother, and, as illustrated in FIG. 18, displays the mother's this month calendar on the display 120 (2030). - In addition, the user can utter the user name, not the identification mark (2025).
- When the user utters an incorrect user name or a different expression irrelevant to the user name, the profile selected by the user may not be identified. In such a case, the
controller 110 may display the speech recognition UI on the display 120 in response to the user utterance, and display a text “say the profile number” on the speech recognition UI (2026). In addition, via the speaker 181, the controller 110 may output the text, which is displayed on the speech recognition UI, as a speech using the TTS function. - When the
controller 110 has repeated the request for selecting the profile more than twice (2027), the controller 110 may display the speech recognition UI on the display 120 and display a text “it is impossible to get accurate information” on the speech recognition UI, as illustrated in FIG. 19 (2028). In addition, via the speaker 181, the controller 110 may output the text, which is displayed on the speech recognition UI, as a speech using the TTS function. - Although not shown in the drawing, when the user does not want to select a user contained in the profile list, or wants to cancel the calendar display itself, the user can say “cancel”. The
controller 110 may display the speech recognition UI on the display 120 in response to the user utterance, and display a text “it is canceled” on the speech recognition UI. In addition, via the speaker 181, the controller 110 may output the text, which is displayed on the speech recognition UI, as a speech using the TTS function. - As illustrated in
FIG. 20, without referring to a specific user, the user may ask for information that is stored differently for each user. - As illustrated in
FIG. 20, when a speech “show this month calendar” is input via the microphone 182 (2040), the controller 110 of the refrigerator 1 may analyze the user speech through the speech recognition function, thereby identifying the user intention. Alternatively, the controller 110 of the refrigerator 1 may transmit the speech data to the first server SV1 and receive the analyzed information of the speech data from the first server SV1. - When an expression indicating the specific user is not contained in the speech data, the
controller 110 of the refrigerator 1 compares the speech data with speech data of the users, which is stored in advance, and identifies whether speech data matching the input speech data is present among the pre-stored speech data (2041). - The
controller 110 of the refrigerator 1 compares parameters of the speech data input via the microphone 182, such as the amplitude, waveform, frequency or period, with parameters of the pre-stored speech data, and selects, from among the pre-stored speech data, the speech data identified as matching the speech data input via the microphone 182. - When it is identified that speech data matching the speech data input via the
microphone 182 is present among the pre-stored speech data, the controller 110 may display the calendar of the user indicated by the selected speech data, on the display 120 (2042). For example, when it is identified that the matched speech data among the pre-stored speech data is the mother's, the controller 110 may display the mother's calendar for this month on the display 120. - When it is identified that speech data matching the speech data input via the
microphone 182 is not present among the pre-stored speech data (no in 2041), the controller 110 of the refrigerator 1 may display the speech recognition UI on the display 120 and then display a text “please select a profile” on the speech recognition UI as illustrated in FIG. 19 (2021). In addition, via the speaker 181, the controller 110 may output the text, which is displayed on the speech recognition UI, as a speech using the TTS function. The controller 110 may display a user profile list, which is pre-registered in the refrigerator 1, together with the speech recognition UI. - As illustrated in
FIG. 19, the user can utter a speech for selecting a specific user in response to a speech requesting confirmation of which user in the profile list to select (2022). For example, the user can utter an identification mark. - In order to select the second user on the profile list, the user may select the identification mark by uttering “two” or “second” (2023 and 2024).
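The parameter comparison used in operation 2041 can be sketched as follows. This is a minimal illustration only, not the claimed implementation: the parameter names (amplitude, frequency) and the matching threshold are assumptions made for the example.

```python
# Illustrative sketch of operation 2041: match input speech against
# pre-stored speaker profiles by comparing simple signal parameters.
# Parameter names and threshold are assumptions, not the patented algorithm.

def match_speaker(input_params, stored_profiles, threshold=0.15):
    """Return the registered user whose stored parameters are closest
    to the input parameters, or None when no profile is close enough."""
    best_user, best_dist = None, float("inf")
    for user, params in stored_profiles.items():
        # Mean normalized absolute difference over the stored parameters.
        dist = sum(
            abs(input_params[k] - params[k]) / max(abs(params[k]), 1e-9)
            for k in params
        ) / len(params)
        if dist < best_dist:
            best_user, best_dist = user, dist
    return best_user if best_dist <= threshold else None

profiles = {
    "mother": {"amplitude": 0.62, "frequency": 210.0},
    "father": {"amplitude": 0.80, "frequency": 120.0},
}

print(match_speaker({"amplitude": 0.60, "frequency": 205.0}, profiles))  # mother
print(match_speaker({"amplitude": 0.10, "frequency": 400.0}, profiles))  # None
```

When no stored profile is close enough, the flow falls back to displaying the profile list, as in operation 2021 above.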
- The
controller 110 may select a user name corresponding to the identification mark uttered by the user, and display the selected user's calendar on the display 120. That is, the controller 110 selects the user corresponding to the uttered “two” or “second”, i.e., the mother, and as illustrated in FIG. 18, displays the mother's calendar for this month on the display 120 (2030). - In addition, the user can utter the user name instead of the identification mark (2025). When the user utters an incorrect user name or a different expression irrelevant to the user name, the profile selected by the user may not be identified. In such a case, as illustrated in
FIG. 19, the controller 110 may display the speech recognition UI on the display 120 in response to the user's utterance, and display a text “say the profile number” on the speech recognition UI (2026). In addition, via the speaker 181, the controller 110 may output the text, which is displayed on the speech recognition UI, as a speech using the TTS function. - When the
controller 110 has repeated the request for selecting the profile more than twice (2027), the controller 110 may display the speech recognition UI on the display 120 and display a text “it is impossible to get accurate information” on the speech recognition UI, as illustrated in FIG. 19 (2028). In addition, via the speaker 181, the controller 110 may output the text, which is displayed on the speech recognition UI, as a speech using the TTS function. - As described above, even when the user asks for information that is stored differently for each user without referring to a specific user, the
refrigerator 1 according to one embodiment may identify the user and provide information that is appropriate for that user, according to the method illustrated in FIG. 20. -
FIG. 21A is a block diagram of a controller for data training and recognition according to one embodiment of the present disclosure. - Referring to
FIG. 21A, a controller 2100 may include a data learner 2110 and a data recognizer 2120. The controller 2100 may correspond to the controller 110 of the refrigerator 1. Alternatively, the controller 2100 may correspond to a controller (not shown) of the first server SV1. - The data learner 2110 may train a learning network model to have a criterion to recognize speech data. For example, the data learner 2110 may train the learning network model to have criteria to recognize the food name corresponding to the first speech or the identification mark corresponding to the second speech. Alternatively, the data learner 2110 may train the learning network model to have a criterion to recognize images. For example, the data learner 2110 may train the learning network model to have a criterion to recognize foods stored in the storage compartment based on images captured of the storage compartment of the
refrigerator 1. - The data learner 2110 may train the learning network model (i.e., data recognition model) using training data according to a supervised learning method or an unsupervised learning method based on the artificial intelligence algorithms.
- According to one embodiment, the data learner 2110 may train the learning network model by using training data such as speech data and a text (e.g., food names and identification marks) corresponding to the speech data. Alternatively, the data learner 2110 may train the learning network model by using training data such as speech data and user information on the user uttering the speech data. Alternatively, the data learner 2110 may train the learning network model by using training data such as image data and a text (e.g., food names) corresponding to the image data.
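The training-data pairings listed above can be sketched as labeled pairs; the field names and sample values below are hypothetical placeholders rather than data from the disclosure.

```python
# Sketch of the three kinds of training pairs described above; all
# values are hypothetical placeholders.

speech_to_text = [
    ({"speech": "feat_milk_01"}, "milk"),       # speech data -> food name
    ({"speech": "feat_two_01"}, "two"),         # speech data -> identification mark
]
speech_to_user = [
    ({"speech": "feat_milk_01"}, "mother"),     # speech data -> uttering user
]
image_to_text = [
    ({"image": "shelf_crop_01"}, "apple pie"),  # image data -> food name
]

def make_supervised_batch(pairs):
    """Split (example, label) pairs into parallel input/target lists,
    the shape typically consumed by a supervised learner."""
    inputs = [x for x, _ in pairs]
    targets = [y for _, y in pairs]
    return inputs, targets

xs, ys = make_supervised_batch(speech_to_text)
print(ys)  # ['milk', 'two']
```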
- The
data recognizer 2120 may recognize speech data by applying the speech data as feature data to the trained learning network model. For example, the data recognizer 2120 may recognize the food name or the identification mark corresponding to the speech data, by applying the speech data related to the first speech or the second speech as feature data to the learning network model. The recognition result of the learning network model (i.e., data recognition model) may be used to refine the learning network model. Alternatively, the data recognizer 2120 may recognize the user uttering the speech data by applying the speech data as feature data to the learning network model. - Further, the
data recognizer 2120 may recognize image data by applying image data, which is acquired by capturing the storage compartment 20, as feature data to the trained learning network model. For example, the data recognizer 2120 may recognize the food name corresponding to the image data, by applying the image data as feature data to the learning network model. - At least one of the data learner 2110 and the
data recognizer 2120 may be manufactured as at least one hardware chip and then mounted to an electronic device. For example, at least one of the data learner 2110 and the data recognizer 2120 may be manufactured in the form of a dedicated hardware chip for artificial intelligence (AI) or manufactured as a part of a conventional general purpose processor (e.g., CPU or application processor) or a dedicated graphics processor (e.g., GPU) and then mounted to the above mentioned electronic device. The dedicated hardware chip for artificial intelligence is a dedicated processor specialized for probability calculation; it has higher parallel processing performance than conventional general purpose processors and thus can quickly process the computations of artificial intelligence applications such as machine learning. - In this case, the data learner 2110 and the
data recognizer 2120 may be mounted on one electronic device or on separate electronic devices, respectively. For example, one of the data learner 2110 and the data recognizer 2120 may be contained in the refrigerator 1 and the other may be contained in the server SV1. Through wired or wireless communication, at least a part of the learning network model constructed by the data learner 2110 may be provided to the data recognizer 2120, and data input to the data recognizer 2120 may be provided as additional training data to the data learner 2110. - At least one of the data learner 2110 and the
data recognizer 2120 may be implemented as a software module. When at least one of the data learner 2110 and the data recognizer 2120 is implemented as a software module (or a program module including instructions), the software module may be stored in non-transitory computer readable media. In this case, at least one software module may be provided by an Operating System (OS) or by a certain application. Alternatively, some of the at least one software module may be provided by the OS and the remainder may be provided by a certain application. -
FIG. 21B is a detailed block diagram of the data learner 2110 and the data recognizer 2120 according to one embodiment of the present disclosure. - (a) of
FIG. 21B is a block diagram of the data learner 2110 according to one embodiment of the present disclosure. - Referring to (a) of
FIG. 21B, according to one embodiment, the data learner 2110 may include a data acquirer 2110-1, a pre-processor 2110-2, a training data selector 2110-3, a model learner 2110-4 and a model evaluator 2110-5. Depending on the embodiment, the data learner 2110 may necessarily include the data acquirer 2110-1 and the model learner 2110-4, and selectively include at least one of the pre-processor 2110-2, the training data selector 2110-3, and the model evaluator 2110-5, or may include none of the pre-processor 2110-2, the training data selector 2110-3, and the model evaluator 2110-5. - The data acquirer 2110-1 may acquire data needed for learning a criterion for recognizing speech data or image data.
- For example, the data acquirer 2110-1 may acquire speech data or image data. The data acquirer 2110-1 may acquire speech data or image data from the
refrigerator 1. Alternatively, the data acquirer 2110-1 may acquire speech data or image data from a third device (e.g., a mobile terminal or a server) connected to the refrigerator 1 through communication. Alternatively, the data acquirer 2110-1 may acquire speech data or image data from a device, or a database, configured to store or manage training data. - The pre-processor 2110-2 may pre-process the speech data or image data. The pre-processor 2110-2 may process the acquired speech data or image data into a predetermined format so that the model learner 2110-4, which will be described later, may use the acquired data for learning.
- For example, the pre-processor 2110-2 may remove noise from the data, such as the speech data or image data, acquired by the data acquirer 2110-1 so as to select effective data, or may process the data into a certain format. Alternatively, the pre-processor 2110-2 may process the acquired data into a form suitable for learning.
- The training data selector 2110-3 may select speech data or image data needed for learning from the pre-processed data according to pre-determined criteria, or may randomly select the speech data or image data. The selected training data may be provided to the model learner 2110-4. The pre-determined criteria may include at least one of an attribute of data, a generation time of data, a place where data is generated, an apparatus for generating data, a reliability of data, and a size of data.
- In addition, the training data selector 2110-3 may select training data according to a criterion pre-determined by the training of the model learner 2110-4 described later.
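Selection against the pre-determined criteria above, with the optional random selection, can be sketched as a simple filter; the criterion names and threshold values here are assumptions for the example, not the patented logic.

```python
# Illustrative training-data selector: filter pre-processed samples by
# criteria (attribute, reliability, size) and optionally sample at
# random. Criteria names and thresholds are assumed for the sketch.

import random

def select_training_data(samples, attribute=None, min_reliability=0.0,
                         max_size=float("inf"), sample_n=None, seed=0):
    chosen = [
        s for s in samples
        if (attribute is None or s["attribute"] == attribute)
        and s["reliability"] >= min_reliability
        and s["size"] <= max_size
    ]
    if sample_n is not None:  # optional random selection step
        chosen = random.Random(seed).sample(chosen, min(sample_n, len(chosen)))
    return chosen

samples = [
    {"attribute": "speech", "reliability": 0.9, "size": 120},
    {"attribute": "speech", "reliability": 0.4, "size": 80},
    {"attribute": "image", "reliability": 0.95, "size": 5000},
]

kept = select_training_data(samples, attribute="speech", min_reliability=0.5)
print(len(kept))  # 1
```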
- The model learner 2110-4 may train the learning network model so as to have a criterion for recognizing speech data or image data based on the training data. For example, the model learner 2110-4 may train the learning network model so as to have a criterion for recognizing the food name or the identification mark corresponding to the speech data. For example, the model learner 2110-4 may train the learning network model so as to have a criterion for recognizing the food stored in the storage compartment based on the image data of the storage compartment of the
refrigerator 1. - In this case, the learning network model may be a model constructed in advance. For example, the learning network model may be a model that is pre-constructed by receiving basic training data (e.g., sample image data or speech data).
- As a result of the training by the model learner 2110-4, the learning network model may be set to recognize (or determine, estimate, infer, predict) the food name or the identification mark corresponding to the speech. Alternatively, the learning network model may be set to recognize (or determine, estimate, infer, predict) the user uttering the speech. Alternatively, the learning network model may be set to recognize (or determine, estimate, infer, predict) the food stored in the storage compartment based on the image data.
- The learning network model may be a model based on a neural network. The learning network model may be designed to mimic the human brain structure on a computer. The learning network model may include a plurality of network nodes having weights to mimic the neurons of a neural network. The plurality of network nodes may form respective connections to mimic the synaptic activity of neurons that send and receive signals through synapses. For example, the learning network model may include a neural network model or a deep learning model developed from the neural network model. In the deep learning model, the plurality of network nodes may be placed at different depths (or layers) and may send and receive data according to convolutional connections. For example, a deep neural network (DNN), a recurrent neural network (RNN), or a bidirectional recurrent deep neural network (BRDNN) may be used as the learning network model, but the model is not limited thereto.
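The layered, weighted nodes described above can be illustrated by a tiny feed-forward pass in plain Python. This is a sketch only; the layer sizes, weight values, and sigmoid activation are arbitrary choices made for the example.

```python
# Minimal feed-forward pass: each layer is (weights, biases), each node
# sums its weighted inputs and applies a sigmoid. Values are arbitrary.

import math

def forward(x, layers):
    """Propagate input x through a list of (weights, biases) layers."""
    for weights, biases in layers:
        x = [
            1.0 / (1.0 + math.exp(-(sum(w * xi for w, xi in zip(row, x)) + b)))
            for row, b in zip(weights, biases)
        ]
    return x

layers = [
    ([[0.5, -0.2], [0.3, 0.8]], [0.0, 0.1]),  # hidden layer: 2 inputs -> 2 nodes
    ([[1.0, -1.0]], [0.0]),                   # output layer: 2 -> 1 node
]

out = forward([1.0, 0.5], layers)
print(round(out[0], 3))  # 0.477
```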
- According to various embodiments, when a plurality of pre-constructed learning network models is present, the model learner 2110-4 may select a learning network model that is closely related to the input training data and the basic training data, as the learning network model to be trained. For example, the model learner 2110-4 may train the learning network model by using a learning algorithm including an error back-propagation method or a gradient descent method. Alternatively, the model learner 2110-4 may train the learning network model through a supervised learning method using input values as the training data. Alternatively, the model learner 2110-4 may allow the learning network model to learn through an unsupervised learning method, which learns without supervision and finds a criterion on its own. Alternatively, the model learner 2110-4 may train the learning network model through a reinforcement learning method using feedback on whether a result of learning is correct or not.
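The gradient descent method named above can be illustrated on a one-weight model; the training pairs and learning rate below are made-up values for the sketch, not parameters from the disclosure.

```python
# Gradient descent sketch: minimize squared error between w*x and the
# labeled target for a single weight w. Purely illustrative.

def train(pairs, lr=0.05, epochs=200):
    w = 0.0
    for _ in range(epochs):
        for x, y in pairs:
            pred = w * x
            grad = 2 * (pred - y) * x  # d/dw of (pred - y)**2
            w -= lr * grad             # gradient descent step
    return w

# Labeled pairs consistent with y = 2x; the learner should recover w ~= 2.
w = train([(1.0, 2.0), (2.0, 4.0)])
print(round(w, 3))  # 2.0
```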
- Furthermore, when a learning network model is trained, the model learner 2110-4 may store the trained learning network model. In this case, the model learner 2110-4 may store the trained learning network model in the
storage 130 of the refrigerator 1. Alternatively, the model learner 2110-4 may store the trained learning network model in the memory of the first server SV1 or the memory of the second server SV2 connected to the refrigerator 1 via a wired or wireless network. - In this case, the
storage 130 in which the trained learning network model is stored may also store instructions or data associated with, for example, at least one other element of the refrigerator 1. The memory may also store software and/or programs. For example, the programs may include a kernel, middleware, an application programming interface (API), and/or an application program (or “application”). - The model evaluator 2110-5 may input evaluation data to the learning network model and, when a recognition result output for the evaluation data does not satisfy a predetermined criterion, the model evaluator 2110-5 may make the model learner 2110-4 learn again. In this case, the evaluation data may be pre-set data for evaluating the learning network model.
- For example, when the number or ratio of evaluation data with incorrect recognition results, from among the recognition results of the trained learning network model on the evaluation data, exceeds a predetermined threshold value, the model evaluator 2110-5 may evaluate that the predetermined criterion is not satisfied. For example, when the predetermined criterion is defined as a ratio of 2% and the trained learning network model outputs incorrect recognition results for more than 20 out of a total of 1,000 pieces of evaluation data, the model evaluator 2110-5 may evaluate that the trained learning network model is inappropriate.
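The pass/fail rule in the 2% example above reduces to a single ratio check, sketched here for illustration:

```python
# Evaluator's criterion: the model fails when the ratio of incorrect
# recognitions on the evaluation data exceeds the predetermined ratio
# (2% in the example above).

def passes_evaluation(incorrect, total, max_error_ratio=0.02):
    return (incorrect / total) <= max_error_ratio

print(passes_evaluation(20, 1000))  # True: 20/1000 = 2%, within the criterion
print(passes_evaluation(21, 1000))  # False: exceeds 2%, so the model is retrained
```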
- On the other hand, when a plurality of trained learning network models is present, the model evaluator 2110-5 may evaluate whether each of the trained learning network models satisfies the predetermined criteria and may select a learning network model satisfying the predetermined criteria as the final learning network model. In this case, when a plurality of learning network models satisfying the predetermined criteria is present, the model evaluator 2110-5 may select, as the final learning network model, one model or a certain number of models chosen in advance in descending order of evaluation score.
- At least one of the data acquirer 2110-1, the pre-processor 2110-2, the training data selector 2110-3, the model learner 2110-4, and the model evaluator 2110-5 in the
data learner 2110 may be manufactured as at least one hardware chip and then mounted to an electronic device. For example, at least one of the data acquirer 2110-1, the pre-processor 2110-2, the training data selector 2110-3, the model learner 2110-4, and the model evaluator 2110-5 may be manufactured in the form of a dedicated hardware chip for artificial intelligence (AI) or manufactured as a part of a conventional general purpose processor (e.g., CPU or application processor) or a dedicated graphics processor (e.g., GPU) and then mounted to the above mentioned electronic device. - In this case, the data acquirer 2110-1, the pre-processor 2110-2, the training data selector 2110-3, the model learner 2110-4, and the model evaluator 2110-5 may be mounted on one electronic device or on separate electronic devices, respectively. For example, some of the data acquirer 2110-1, the pre-processor 2110-2, the training data selector 2110-3, the model learner 2110-4, and the model evaluator 2110-5 may be contained in the
refrigerator 1 and the remainder may be contained in the server SV1 or the server SV2. - At least one of the data acquirer 2110-1, the pre-processor 2110-2, the training data selector 2110-3, the model learner 2110-4, and the model evaluator 2110-5 may be implemented as a software module. When at least one of the data acquirer 2110-1, the pre-processor 2110-2, the training data selector 2110-3, the model learner 2110-4, and the model evaluator 2110-5 is implemented as a software module (or a program module including instructions), the software module may be stored in non-transitory computer readable media. In this case, at least one software module may be provided by an Operating System (OS) or by a certain application. Alternatively, some of the at least one software module may be provided by the OS and the remainder may be provided by a certain application.
- (b) of
FIG. 21B is a block diagram of the data recognizer 2120 according to one embodiment of the present disclosure. - Referring to (b) of
FIG. 21B, the data recognizer 2120 according to one embodiment may include a data acquirer 2120-1, a pre-processor 2120-2, a feature data selector 2120-3, a recognition result provider 2120-4, and a model updater 2120-5. Depending on the embodiment, the data recognizer 2120 according to one embodiment may necessarily include the data acquirer 2120-1 and the recognition result provider 2120-4, and selectively include at least one of the pre-processor 2120-2, the feature data selector 2120-3, and the model updater 2120-5. - The
data recognizer 2120 may recognize the food name and the identification mark corresponding to the speech data, or the user uttering the speech data, by applying the speech data as feature data to the trained learning network model. Alternatively, the data recognizer 2120 may recognize the food stored in the storage compartment 20 of the refrigerator 1, by applying the image data as feature data to the trained learning network model. - First, the data acquirer 2120-1 may acquire, from the speech data, the speech data needed for recognizing the food name, the identification mark, and the user uttering the speech. Alternatively, the data acquirer 2120-1 may acquire, from the image data, the image data needed for recognizing the food.
- For example, the data acquirer 2120-1 may acquire data, which is directly input from the user or selected by the user, or may acquire a variety of sensing information detected by various sensors of the
refrigerator 1. Alternatively, the data acquirer 2120-1 may acquire data from an external device (e.g., a mobile terminal or a server) communicating with the refrigerator 1. - The pre-processor 2120-2 may pre-process the speech data or image data. The pre-processor 2120-2 may process the acquired speech data or image data into a predetermined format so that the recognition result provider 2120-4, which will be described later, may use the acquired data for the situation determination.
- For example, the pre-processor 2120-2 may remove noise from the data, such as the speech data or image data, acquired by the data acquirer 2120-1 so as to select effective data, or may process the data into a certain format. Alternatively, the pre-processor 2120-2 may process the acquired data into a form suitable for recognition.
- The feature data selector 2120-3 may select speech data or image data needed for recognition from the pre-processed data according to pre-determined criteria, or may randomly select the speech data or image data from the pre-processed data. The selected feature data may be provided to the recognition result provider 2120-4. The pre-determined criteria may include at least one of an attribute of data, a generation time of data, a place where data is generated, an apparatus for generating data, a reliability of data, and a size of data.
- The recognition result provider 2120-4 may recognize the selected feature data by applying the selected feature data to the learning network model. For example, the recognition result provider 2120-4 may recognize the food name or the identification mark corresponding to the speech by applying the selected speech data to the learning network model. Alternatively, the recognition result provider 2120-4 may recognize the user uttering the speech by applying the selected speech data to the learning network model. Alternatively, the recognition result provider 2120-4 may recognize the food corresponding to the image data by applying the selected image data to the learning network model.
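Applying selected feature data to a trained model can be sketched with a toy stand-in: here a nearest-centroid classifier plays the role of the learning network model, and its centroid values are invented solely for the example.

```python
# Sketch of the recognition result provider: apply feature data to a
# toy "trained model" (nearest centroid) and return the recognized
# label. The model form and values are illustrative assumptions.

def recognize_with_model(feature, centroids):
    """Return the label whose trained centroid is closest to the
    feature vector (squared Euclidean distance)."""
    def sqdist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: sqdist(feature, centroids[label]))

# Toy "trained" centroids for two food names.
centroids = {
    "apple pie": [0.9, 0.1],
    "milk": [0.1, 0.8],
}

print(recognize_with_model([0.85, 0.2], centroids))  # apple pie
```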
- The model updater 2120-5 may allow the learning network model to be refined, based on the evaluation of the recognition result provided from the recognition result provider 2120-4. For example, the model updater 2120-5 may allow the model learner 2110-4 to refine the learning network model, by providing the food name, the identification mark or the user information, which is provided from the recognition result provider 2120-4, to the model learner 2110-4 again.
- At least one of the data acquirer 2120-1, the pre-processor 2120-2, the feature data selector 2120-3, the recognition result provider 2120-4, and the model updater 2120-5 in the
data recognizer 2120 may be manufactured as at least one hardware chip and then mounted to an electronic device. For example, at least one of the data acquirer 2120-1, the pre-processor 2120-2, the feature data selector 2120-3, the recognition result provider 2120-4, and the model updater 2120-5 may be manufactured in the form of a dedicated hardware chip for artificial intelligence (AI) or manufactured as a part of a conventional general purpose processor (e.g., CPU or application processor) or a dedicated graphics processor (e.g., GPU) and then mounted to the above mentioned electronic device. - In addition, the data acquirer 2120-1, the pre-processor 2120-2, the feature data selector 2120-3, the recognition result provider 2120-4, and the model updater 2120-5 may be mounted on one electronic device or on separate electronic devices, respectively. For example, some of the data acquirer 2120-1, the pre-processor 2120-2, the feature data selector 2120-3, the recognition result provider 2120-4, and the model updater 2120-5 may be contained in the
refrigerator 1 and the remainder may be contained in the server. - At least one of the data acquirer 2120-1, the pre-processor 2120-2, the feature data selector 2120-3, the recognition result provider 2120-4, and the model updater 2120-5 may be implemented as a software module. When at least one of the data acquirer 2120-1, the pre-processor 2120-2, the feature data selector 2120-3, the recognition result provider 2120-4, and the model updater 2120-5 is implemented as a software module (or a program module including instructions), the software module may be stored in non-transitory computer readable media. In this case, at least one software module may be provided by an Operating System (OS) or by a certain application. Alternatively, some of the at least one software module may be provided by the OS and the remainder may be provided by a certain application.
-
FIG. 22 is a flow chart of the refrigerator displaying information according to one embodiment of the present disclosure. - The
refrigerator 1 may identify whether a first speech including a food name is input through the microphone 182 (2201). - When it is identified that the first speech is input based on the result of the identification, the
refrigerator 1 may display a food list including food information corresponding to the food name and an identification mark for identifying the food information, on the display 120 installed on the front surface of the refrigerator 1, based on the recognition of the first speech (2202). - For example, based on the first speech being identified, the
refrigerator 1 may acquire the food name by recognizing the first speech using the learning network model, which is trained through the artificial intelligence algorithm, and display the food list including the food information corresponding to the food name, on the display 120. In this case, the learning network model may be a learning network model that is trained using a plurality of speeches and a plurality of words corresponding to the plurality of speeches. - As another example, based on the first speech being identified, the
refrigerator 1 may acquire the food name and information on the user uttering the first speech by recognizing the first speech using the learning network model, which is trained through the artificial intelligence algorithm, and display the food list including the food information (e.g., information on a food preferred by the user) which corresponds to the food name and is related to the user information, on the display 120. In this case, the learning network model may be a learning network model that is trained using a plurality of speeches and the user information respectively corresponding to the plurality of speeches. - As yet another example, based on the first speech being identified, the
refrigerator 1 may identify whether a food related to the recognized food name is placed in the storage compartment 20 of the refrigerator 1. When it is identified that the food is not placed in the storage compartment 20, the refrigerator 1 may display the food list including the food information corresponding to the food name, on the display 120. Meanwhile, when it is identified that the food is placed in the storage compartment 20, the refrigerator 1 may display information indicating that the food related to the recognized food name is placed in the storage compartment 20, on the display 120. - Next, the
refrigerator 1 may identify whether a second speech indicating the identification mark is input via the microphone 182 (2203). - When it is identified that the second speech is input, based on the result of the identification, the
refrigerator 1 may display at least one piece of purchase information of the food corresponding to the identification mark, on the display 120 installed on the front surface of the refrigerator 1, based on the recognition of the second speech (2204). - For example, based on the second speech being identified, the
refrigerator 1 may acquire the identification mark by recognizing the second speech using the learning network model, which is trained through the artificial intelligence algorithm, and display the food purchase information corresponding to the identification mark, on the display. -
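The two-step flow of FIG. 22 (operations 2201 through 2204) can be sketched as follows; the catalog contents and function names are hypothetical examples, not part of the disclosure.

```python
# Sketch of the FIG. 22 flow: a first speech with a food name yields a
# numbered food list; a second speech with an identification mark
# yields the purchase information. Catalog values are made up.

CATALOG = {
    "apple pie": [
        {"mark": 1, "item": "apple pie (store A)", "price": 12.0},
        {"mark": 2, "item": "apple pie (store B)", "price": 10.5},
    ],
}

def handle_first_speech(food_name):
    """Return the food list (identification mark + info) for the name."""
    return CATALOG.get(food_name, [])

def handle_second_speech(food_list, mark):
    """Return purchase info for the uttered identification mark."""
    for entry in food_list:
        if entry["mark"] == mark:
            return entry
    return None

food_list = handle_first_speech("apple pie")   # first speech (2201-2202)
choice = handle_second_speech(food_list, 2)    # second speech (2203-2204)
print(choice["item"])  # apple pie (store B)
```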
FIG. 23 is a flow chart of a network system according to one embodiment of the present disclosure. - Referring to
FIG. 23 , a network system may include a first element 2301 and a second element 2302. The first element 2301 may be the refrigerator 1, and the second element 2302 may be the server SV1 in which the data recognition model is stored, or a cloud computing system including at least one server. Alternatively, the first element 2301 may be a general-purpose processor and the second element 2302 may be a dedicated artificial intelligence processor. Alternatively, the first element 2301 may be at least one application, and the second element 2302 may be an operating system (OS). That is, the second element 2302 may be more integrated, more dedicated, and have a smaller delay, greater performance, and more resources than the first element 2301, and thus may process the operations required at the time of generation, update, or application of the data recognition model more quickly and effectively than the first element 2301. - Meanwhile, an interface for transmitting/receiving data between the
first element 2301 and the second element 2302 may be defined. - For example, an API having, as a parameter (or an intermediate value or a transfer value), feature data to be applied to a data recognition model may be defined. An API may be defined as a set of subroutines or functions that may be called by any one protocol (e.g., a protocol defined in the refrigerator 1) for a certain processing of another protocol (e.g., a protocol defined in the server SV1). In other words, an environment in which an operation of another protocol may be performed by any one protocol through an API may be provided.
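An interface of this kind, a function exposed by the server-side protocol and called by the refrigerator-side protocol with the speech-derived feature data as its parameter, might look like the following sketch. The function names and the JSON-over-string communication format are assumptions for illustration; the disclosure does not fix a concrete signature, and the model call is stubbed.

```python
import json

# Hypothetical API between the first element (refrigerator side) and
# the second element (server side holding the data recognition model).

def to_request(feature_data):
    """Refrigerator side: wrap speech feature data (bytes) in the
    defined communication format (assumed here to be JSON)."""
    return json.dumps({"feature_data": feature_data.hex()})

def recognize(request):
    """Server side: apply the feature data to the data recognition
    model and return recognition information. The model is stubbed."""
    payload = json.loads(request)
    feature_data = bytes.fromhex(payload["feature_data"])
    # Stub: a real model would decode the utterance from feature_data.
    return json.dumps({"food_name": "apple pie", "user": "user-1"})
```

The refrigerator-side protocol would then call `recognize(to_request(pcm_bytes))` and parse the returned recognition information; the point is that the parameter of the defined API is the feature data itself.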
- Referring to
FIG. 23 , the first element 2301 may receive the user's first speech including the food name through the microphone 182 (2310). - The
first element 2301 may convert an input analog speech signal into speech data corresponding to a digital signal and transmit the digital signal to the second element 2302. At this time, the first element 2301 may change the speech data according to the defined communication format and transmit the changed speech data to the second element 2302 (2311). - The
second element 2302 may recognize the received speech data (2312). For example, the second element 2302 may recognize the speech data by applying the received speech data to the trained learning network model, and acquire recognition information based on the result of the recognition. For example, the second element 2302 may acquire the food name and information on the user uttering the speech, from the speech data. The second element 2302 may transmit the acquired recognition information to the first element 2301 (2313). - The
first element 2301 may acquire the information requested by the speech data, based on the received recognition information (2314). For example, the first element 2301 may acquire food information corresponding to the food name, which corresponds to the information requested by the speech data, from the storage of the first element 2301 or from a third element 2303. The third element 2303 may be a server in communication with the first element 2301. For example, the server may be the server SV2 configured to transmit information requested by the speech data in FIGS. 9 and 10 . - In
operation 2312, the second element 2302 may directly transmit the recognition information to the third element 2303 without passing through the first element 2301 (2323). In this case, the third element 2303 may acquire the information requested by the speech data, based on the received recognition information, and transmit the acquired information to the first element 2301 (2324). The first element 2301 may thus acquire the information requested by the speech data from the third element 2303. - When the information requested by the speech data is acquired in
operations 2314 and 2324, the first element 2301 may display a food list including the food information corresponding to the food name and an identification mark for identifying the food information requested by the speech data (2315). - Meanwhile, when the
first element 2301 provides the food list including the identification marks and a second speech of the user indicating one identification mark is recognized, the first element 2301 may display at least one piece of food purchase information corresponding to the identification mark. -
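The two data paths of FIG. 23, recognition information returned to the first element (2313-2314) or forwarded directly to the third element (2323-2324), can be sketched like this. The classes are hypothetical stand-ins for the refrigerator, server SV1, and server SV2, and both the recognition model and the food database are stubbed.

```python
# Sketch of the FIG. 23 message flow among the three elements.

class SecondElement:
    """Server SV1: holds the data recognition model (stubbed here)."""
    def recognize(self, speech_data):                     # operation 2312
        return {"food_name": speech_data.strip().lower()}

    def recognize_and_forward(self, speech_data, third):
        # Operation 2323: send recognition info directly to SV2.
        return third.lookup(self.recognize(speech_data))

class ThirdElement:
    """Server SV2: returns the information requested by the speech."""
    DB = {"water": ["Spring water 2L", "Sparkling water 6-pack"]}
    def lookup(self, recognition_info):
        return self.DB.get(recognition_info["food_name"], [])

class FirstElement:
    """The refrigerator 1."""
    def __init__(self, second, third):
        self.second, self.third = second, third

    def via_first(self, speech_data):
        """Path 2311-2315: recognition info comes back here first."""
        info = self.second.recognize(speech_data)         # 2312-2313
        return self.third.lookup(info)                    # 2314

    def via_third(self, speech_data):
        """Path 2323-2324: SV1 forwards recognition info to SV2."""
        return self.second.recognize_and_forward(speech_data, self.third)
```

Both paths end with the first element holding the requested food information, which it can then display as the food list of operation 2315.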
FIG. 24 is a view illustrating a method in which a user purchases food via a refrigerator according to another embodiment of the present disclosure. - As illustrated in
FIG. 24 , a first speech for ordering a specific food may be input through a microphone 182 (980). - A
controller 110 may recognize the first speech including the food name. As a recognition result, the controller 110 may acquire the recognized food name and information on the user uttering the first speech. - Based on the food name and the user information being acquired, the
controller 110 may acquire food information related to the food name and preferred by the user uttering the first speech (981). - Based on the acquired food information, the
controller 110 may display a food list including information on a plurality of foods preferred by the user and identification marks for identifying the plurality of pieces of food information, on the display 120 (982). - Next, a second speech indicating a particular identification mark among the identification marks contained in the food list may be input via the microphone 182 (983).
- The
controller 110 may recognize the second speech including the identification mark. As a result of the recognition, the controller 110 may display the purchase information of the food corresponding to the identification mark, on the display 120 (984). - Next, a third speech requesting the food purchase may be input through the microphone 182 (985). The third speech requesting the food purchase may be a speech such as "buy now" uttered by the user.
- The
controller 110 may recognize the third speech. As a recognition result of the third speech, the controller 110 may display a UI (e.g., a UI for payment) for purchasing the food on the display 120. In this situation, the controller 110 may control the communication circuitry 140 so that a message indicating the purchase status is output to an external device 986-1 (986). The message indicating the purchase status may include a message for confirming whether to purchase the food before the purchase, a message asking the user to input a password for purchasing the food, or a message indicating the purchase result after the purchase. - For example, the
controller 110 may recognize at least one of the first speech and the third speech to acquire the information of the user uttering the speech. The controller 110 may transmit a message indicating the purchase status to a user device corresponding to the user information, by using the acquired user information. When the food requested by the user is placed in the storage compartment 20 of the refrigerator 1, the controller 110 may transmit information on the stored food to the user device. In addition, the controller 110 may transmit the message to a device belonging to a guardian of the user uttering the speech. For example, when the person who ordered the food is a child, the controller 110 may send a message to a device belonging to the child's parent to notify the purchase status described above. - In response to the received message, the external device 986-1 receiving the message may indicate the purchase status in a text, a speech, an image, a haptic manner, or by emitting light, but is not limited thereto.
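The message-routing behavior described above (send the purchase status to the ordering user's device, and also to a guardian's device when the user is a child) might be sketched as follows. The user-profile fields (`device`, `is_child`, `guardian_device`) are assumptions for illustration, not part of the disclosure.

```python
# Sketch of purchase-status message routing, assuming a simple user
# profile with an optional guardian device.

def route_purchase_status(user, message):
    """Return the list of (device, message) deliveries."""
    deliveries = [(user["device"], message)]
    # When the person who ordered the food is a child, also notify
    # the device belonging to the child's guardian.
    if user.get("is_child") and user.get("guardian_device"):
        deliveries.append((user["guardian_device"], message))
    return deliveries
```

A confirmation message for a child's order would thus reach both the child's tablet and the parent's phone, while an adult's order notifies only the adult's own device.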
- According to some embodiments, when the purchase of the food occurs via the
refrigerator 1, the controller 110 may transmit a message indicating the purchase status to a pre-selected device or to the external device 986-1 communicating with the refrigerator 1. Alternatively, the controller 110 may transmit a message indicating the purchase status to an external device 986-1 satisfying predetermined criteria. At this time, the external device 986-1 satisfying the predetermined criteria may be an external device placed in a certain place (e.g., a living room or a main room), an external device having a signal for communicating with the refrigerator 1 with a certain intensity or more, or an external device placed within a certain distance from the refrigerator 1. In addition, the predetermined criteria may be that the external device 986-1 communicates with the refrigerator 1 according to a certain communication method. For example, the certain communication method may include a local area communication method (e.g., Wi-Fi, Bluetooth, or Zigbee) or a direct communication method. When the refrigerator 1 communicates with the external device 986-1 according to the local area communication method, the controller 110 may transmit a push message indicating the purchase status to the external device 986-1 via a communication channel established according to the local area communication method. - The term "module" used in the present disclosure, for example, may mean a unit including one of hardware, software, and firmware, or a combination of two or more thereof. A "module," for example, may be used interchangeably with terms such as a unit, logic, a logical block, a component, a circuit, etc. The "module" may be a minimum unit of an integrally configured component or a part thereof. The "module" may be a minimum unit performing one or more functions or a portion thereof. The "module" may be implemented mechanically or electronically. 
For example, the "module" according to the present disclosure may include at least one of an application-specific integrated circuit (ASIC) chip performing certain operations, a field-programmable gate array (FPGA), or a programmable-logic device, known or to be developed in the future.
- The above-mentioned embodiments may be implemented in the form of program instructions executed by a variety of computer means and stored in a computer-readable storage medium. The computer-readable storage medium may include program instructions, data files, and data structures, alone or in combination. The program instructions may be specially designed to implement the present disclosure or may be implemented by using various functions or definitions that are well known and available to those of ordinary skill in the computer software field. The computer-readable storage medium may include magnetic media (e.g., a hard disk, a floppy disk, and a magnetic tape), optical media (e.g., a compact disc read-only memory (CD-ROM) and a digital versatile disc (DVD)), magneto-optical media (e.g., a floptical disk), and hardware devices (e.g., a read-only memory (ROM), a random access memory (RAM), or a flash memory). Also, the program instructions may include not only machine code, such as that generated by a compiler, but also high-level language code executable on a computer using an interpreter. The above-described hardware devices may be configured to operate via one or more software modules for performing the operations of the present disclosure, and vice versa.
- Also, a method according to the disclosed embodiments may be provided in a computer program product.
- A computer program product may include a software program, a computer-readable storage medium in which the software program is stored, or an application traded between a seller and a purchaser.
- For example, a computer program product may include an application in the form of a software program (e.g., a downloadable application) that is electronically distributed by the
refrigerator 1, the server SV1, the server SV2, or a manufacturer of the devices, or through an electronic marketplace (e.g., Google Play Store, App Store, etc.). For electronic distribution, at least a portion of the software program may be stored on a storage medium or may be generated temporarily. In this case, the storage medium may be a server of the manufacturer, a server of the electronic marketplace, or a storage medium of a relay server that temporarily stores the software program. - While the present disclosure has been particularly described with reference to exemplary embodiments, it should be understood by those skilled in the art that various changes in form and details may be made without departing from the spirit and scope of the present disclosure.
Claims (24)
Applications Claiming Priority (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR20170000880 | 2017-01-03 | ||
KR10-2017-0000880 | 2017-01-03 | ||
KR1020170179174A KR102412202B1 (en) | 2017-01-03 | 2017-12-26 | Refrigerator and method of displaying information thereof |
KR10-2017-0179174 | 2017-12-26 | ||
PCT/KR2017/015548 WO2018128317A1 (en) | 2017-01-03 | 2017-12-27 | Refrigerator and information display method thereof |
Publications (2)
Publication Number | Publication Date |
---|---|
US20190348044A1 true US20190348044A1 (en) | 2019-11-14 |
US11521606B2 US11521606B2 (en) | 2022-12-06 |
Family
ID=62917641
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/476,008 Active 2039-11-12 US11521606B2 (en) | 2017-01-03 | 2017-12-27 | Refrigerator and information display method thereof |
Country Status (4)
Country | Link |
---|---|
US (1) | US11521606B2 (en) |
EP (1) | EP3546869A4 (en) |
KR (1) | KR102412202B1 (en) |
CN (1) | CN110168298B (en) |
Cited By (84)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20190267000A1 (en) * | 2018-02-23 | 2019-08-29 | Accenture Global Solutions Limited | Adaptive interactive voice response system |
US20210081883A1 (en) * | 2019-09-18 | 2021-03-18 | Divert, Inc. | Systems and methods for determining compliance with an entity's standard operating procedures |
US10965489B2 (en) * | 2019-08-30 | 2021-03-30 | Lg Electronics Inc. | Artificial intelligence refrigerator and method for controlling the same |
US11200894B2 (en) | 2019-06-12 | 2021-12-14 | Sonos, Inc. | Network microphone device with command keyword eventing |
US11200888B2 (en) * | 2019-05-09 | 2021-12-14 | Lg Electronics Inc. | Artificial intelligence device for providing speech recognition function and method of operating artificial intelligence device |
US11212612B2 (en) | 2016-02-22 | 2021-12-28 | Sonos, Inc. | Voice control of a media playback system |
US11211071B2 (en) * | 2018-12-14 | 2021-12-28 | American International Group, Inc. | System, method, and computer program product for home appliance care |
US11217250B2 (en) * | 2019-03-27 | 2022-01-04 | Samsung Electronics Co., Ltd. | Multi-modal interaction with intelligent assistants in voice command devices |
US11288039B2 (en) | 2017-09-29 | 2022-03-29 | Sonos, Inc. | Media playback system with concurrent voice assistance |
US11308959B2 (en) | 2020-02-11 | 2022-04-19 | Spotify Ab | Dynamic adjustment of wake word acceptance tolerance thresholds in voice-controlled devices |
US11308958B2 (en) | 2020-02-07 | 2022-04-19 | Sonos, Inc. | Localized wakeword verification |
US11308962B2 (en) * | 2020-05-20 | 2022-04-19 | Sonos, Inc. | Input detection windowing |
US11315556B2 (en) | 2019-02-08 | 2022-04-26 | Sonos, Inc. | Devices, systems, and methods for distributed voice processing by transmitting sound data associated with a wake word to an appropriate device for identification |
US11315553B2 (en) * | 2018-09-20 | 2022-04-26 | Samsung Electronics Co., Ltd. | Electronic device and method for providing or obtaining data for training thereof |
US11330335B1 (en) * | 2017-09-21 | 2022-05-10 | Amazon Technologies, Inc. | Presentation and management of audio and visual content across devices |
US11328722B2 (en) * | 2020-02-11 | 2022-05-10 | Spotify Ab | Systems and methods for generating a singular voice audio stream |
US11335089B2 (en) * | 2019-07-01 | 2022-05-17 | Zhejiang Normal University | Food detection and identification method based on deep learning |
US11343614B2 (en) | 2018-01-31 | 2022-05-24 | Sonos, Inc. | Device designation of playback and network microphone device arrangements |
US11361756B2 (en) | 2019-06-12 | 2022-06-14 | Sonos, Inc. | Conditional wake word eventing based on environment |
US11405430B2 (en) | 2016-02-22 | 2022-08-02 | Sonos, Inc. | Networked microphone device control |
US11432030B2 (en) | 2018-09-14 | 2022-08-30 | Sonos, Inc. | Networked devices, systems, and methods for associating playback devices based on sound codes |
CN115077160A (en) * | 2021-03-10 | 2022-09-20 | 松下电器研究开发(苏州)有限公司 | Refrigerator, intelligent refrigerator system |
CN115077158A (en) * | 2021-03-10 | 2022-09-20 | 松下电器研究开发(苏州)有限公司 | Refrigerator, intelligent refrigerator system |
CN115077155A (en) * | 2021-03-10 | 2022-09-20 | 松下电器研究开发(苏州)有限公司 | Refrigerator and intelligent refrigerator system |
CN115077157A (en) * | 2021-03-10 | 2022-09-20 | 松下电器研究开发(苏州)有限公司 | Refrigerator and intelligent refrigerator system |
CN115077161A (en) * | 2021-03-10 | 2022-09-20 | 松下电器研究开发(苏州)有限公司 | Refrigerator and intelligent refrigerator system |
CN115077162A (en) * | 2021-03-10 | 2022-09-20 | 松下电器研究开发(苏州)有限公司 | Refrigerator and intelligent refrigerator system |
US20220319511A1 (en) * | 2019-07-22 | 2022-10-06 | Lg Electronics Inc. | Display device and operation method for same |
US11482224B2 (en) | 2020-05-20 | 2022-10-25 | Sonos, Inc. | Command keywords with input detection windowing |
US11482978B2 (en) | 2018-08-28 | 2022-10-25 | Sonos, Inc. | Audio notifications |
US11501773B2 (en) | 2019-06-12 | 2022-11-15 | Sonos, Inc. | Network microphone device with command keyword conditioning |
US11501250B2 (en) * | 2019-08-09 | 2022-11-15 | Lg Electronics Inc. | Refrigerator for providing information on item using artificial intelligence and method of operating the same |
US11500611B2 (en) | 2017-09-08 | 2022-11-15 | Sonos, Inc. | Dynamic computation of system response volume |
US11513763B2 (en) | 2016-02-22 | 2022-11-29 | Sonos, Inc. | Audio response playback |
US11514898B2 (en) | 2016-02-22 | 2022-11-29 | Sonos, Inc. | Voice control of a media playback system |
US11531520B2 (en) | 2016-08-05 | 2022-12-20 | Sonos, Inc. | Playback device supporting concurrent voice assistants |
US11535424B2 (en) | 2015-07-08 | 2022-12-27 | Divert, Inc. | Methods for determining and reporting compliance with rules regarding discarded material |
US11538460B2 (en) | 2018-12-13 | 2022-12-27 | Sonos, Inc. | Networked microphone devices, systems, and methods of localized arbitration |
US11540047B2 (en) | 2018-12-20 | 2022-12-27 | Sonos, Inc. | Optimization of network microphone devices using noise classification |
US11538451B2 (en) | 2017-09-28 | 2022-12-27 | Sonos, Inc. | Multi-channel acoustic echo cancellation |
US11545169B2 (en) | 2016-06-09 | 2023-01-03 | Sonos, Inc. | Dynamic player selection for audio signal processing |
US11551678B2 (en) | 2019-08-30 | 2023-01-10 | Spotify Ab | Systems and methods for generating a cleaned version of ambient sound |
US11551669B2 (en) | 2019-07-31 | 2023-01-10 | Sonos, Inc. | Locally distributed keyword detection |
US11556306B2 (en) | 2016-02-22 | 2023-01-17 | Sonos, Inc. | Voice controlled media playback system |
US11557294B2 (en) | 2018-12-07 | 2023-01-17 | Sonos, Inc. | Systems and methods of operating media playback systems having multiple voice assistant services |
US11562740B2 (en) | 2020-01-07 | 2023-01-24 | Sonos, Inc. | Voice verification for media playback |
US11563842B2 (en) | 2018-08-28 | 2023-01-24 | Sonos, Inc. | Do not disturb feature for audio notifications |
US20230097905A1 (en) * | 2021-09-24 | 2023-03-30 | Haier Us Appliance Solutions, Inc. | Inventory management system in a refrigerator appliance |
US11641559B2 (en) | 2016-09-27 | 2023-05-02 | Sonos, Inc. | Audio playback settings for voice interaction |
US11646045B2 (en) | 2017-09-27 | 2023-05-09 | Sonos, Inc. | Robust short-time fourier transform acoustic echo cancellation during audio playback |
US11646023B2 (en) | 2019-02-08 | 2023-05-09 | Sonos, Inc. | Devices, systems, and methods for distributed voice processing |
US11696074B2 (en) | 2018-06-28 | 2023-07-04 | Sonos, Inc. | Systems and methods for associating playback devices with voice assistant services |
US11698771B2 (en) | 2020-08-25 | 2023-07-11 | Sonos, Inc. | Vocal guidance engines for playback devices |
US11710487B2 (en) | 2019-07-31 | 2023-07-25 | Sonos, Inc. | Locally distributed keyword detection |
US11714600B2 (en) | 2019-07-31 | 2023-08-01 | Sonos, Inc. | Noise classification for event detection |
US11727919B2 (en) | 2020-05-20 | 2023-08-15 | Sonos, Inc. | Memory allocation for keyword spotting engines |
US11726742B2 (en) | 2016-02-22 | 2023-08-15 | Sonos, Inc. | Handling of loss of pairing between networked devices |
US11727933B2 (en) | 2016-10-19 | 2023-08-15 | Sonos, Inc. | Arbitration-based voice recognition |
US11741948B2 (en) | 2018-11-15 | 2023-08-29 | Sonos Vox France Sas | Dilated convolutions and gating for efficient keyword spotting |
US20230290346A1 (en) * | 2018-03-23 | 2023-09-14 | Amazon Technologies, Inc. | Content output management based on speech quality |
US11769505B2 (en) | 2017-09-28 | 2023-09-26 | Sonos, Inc. | Echo of tone interferance cancellation using two acoustic echo cancellers |
US11790911B2 (en) | 2018-09-28 | 2023-10-17 | Sonos, Inc. | Systems and methods for selective wake word detection using neural network models |
US11792590B2 (en) | 2018-05-25 | 2023-10-17 | Sonos, Inc. | Determining and adapting to changes in microphone performance of playback devices |
US11790937B2 (en) | 2018-09-21 | 2023-10-17 | Sonos, Inc. | Voice detection optimization using sound metadata |
US11797263B2 (en) | 2018-05-10 | 2023-10-24 | Sonos, Inc. | Systems and methods for voice-assisted media content selection |
US11798553B2 (en) | 2019-05-03 | 2023-10-24 | Sonos, Inc. | Voice assistant persistence across multiple network microphone devices |
US11822601B2 (en) | 2019-03-15 | 2023-11-21 | Spotify Ab | Ensemble-based data comparison |
US11862161B2 (en) | 2019-10-22 | 2024-01-02 | Sonos, Inc. | VAS toggle based on device orientation |
US11869503B2 (en) | 2019-12-20 | 2024-01-09 | Sonos, Inc. | Offline voice control |
US11900937B2 (en) | 2017-08-07 | 2024-02-13 | Sonos, Inc. | Wake-word detection suppression |
US11899519B2 (en) | 2018-10-23 | 2024-02-13 | Sonos, Inc. | Multiple stage network microphone device with reduced power consumption and processing load |
EP4206895A4 (en) * | 2020-10-30 | 2024-02-14 | Qingdao Haier Refrigerator Co., Ltd | METHOD FOR PRINTING FOOD MATERIAL INFORMATION, SYSTEM FOR PRINTING FOOD MATERIAL INFORMATION AND READABLE STORAGE MEDIUM |
US11979960B2 (en) | 2016-07-15 | 2024-05-07 | Sonos, Inc. | Contextualization of voice inputs |
US11984123B2 (en) | 2020-11-12 | 2024-05-14 | Sonos, Inc. | Network device interaction by range |
US12047753B1 (en) | 2017-09-28 | 2024-07-23 | Sonos, Inc. | Three-dimensional beam forming with a microphone array |
US12062383B2 (en) | 2018-09-29 | 2024-08-13 | Sonos, Inc. | Linear filtering for noise-suppressed speech detection via multiple network microphone devices |
US12118273B2 (en) | 2020-01-31 | 2024-10-15 | Sonos, Inc. | Local voice data processing |
US12154569B2 (en) | 2017-12-11 | 2024-11-26 | Sonos, Inc. | Home graph |
US12165651B2 (en) | 2018-09-25 | 2024-12-10 | Sonos, Inc. | Voice detection optimization based on selected voice assistant service |
US12188715B2 (en) * | 2020-11-04 | 2025-01-07 | Hisense Visual Technology Co., Ltd. | Refrigerator and method for editing food information |
US12212945B2 (en) | 2017-12-10 | 2025-01-28 | Sonos, Inc. | Network microphone devices with automatic do not disturb actuation capabilities |
US12217748B2 (en) | 2017-03-27 | 2025-02-04 | Sonos, Inc. | Systems and methods of multiple voice services |
US12283269B2 (en) | 2020-10-16 | 2025-04-22 | Sonos, Inc. | Intent inference in audiovisual communication sessions |
US12314633B2 (en) | 2024-03-05 | 2025-05-27 | Sonos, Inc. | Playback device supporting concurrent voice assistants |
Families Citing this family (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR102377971B1 (en) * | 2018-11-16 | 2022-03-25 | 엘지전자 주식회사 | Artificial intelligence refrigerator including display apparatus |
US11499773B2 (en) * | 2019-03-29 | 2022-11-15 | Lg Electronics Inc. | Refrigerator and method for managing articles in refrigerator |
KR102745303B1 (en) * | 2019-06-28 | 2024-12-19 | 엘지전자 주식회사 | Refrigerator and method for operating the refrigerator |
KR102744974B1 (en) * | 2019-07-22 | 2024-12-23 | 엘지전자 주식회사 | An artificial intelligence apparatus for wine refrigerator and method for the same |
CN111274280A (en) * | 2019-10-25 | 2020-06-12 | 青岛海尔电冰箱有限公司 | Timing information feedback method and prediction system for refrigerator |
CN111276147A (en) * | 2019-12-30 | 2020-06-12 | 天津大学 | A method for recording diet based on voice input |
JP7433959B2 (en) * | 2020-02-13 | 2024-02-20 | 東芝ライフスタイル株式会社 | information processing system |
KR20240029966A (en) * | 2022-08-29 | 2024-03-07 | 엘지전자 주식회사 | Refrigerator and conrtol method thereof |
Family Cites Families (36)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2002115956A (en) | 2000-10-11 | 2002-04-19 | Matsushita Electric Ind Co Ltd | Stock control refrigerator |
ITPN20010017A1 (en) * | 2001-02-23 | 2002-08-23 | Electrolux Professional Spa | KITCHEN AND / OR DOMESTIC APPLIANCE |
US6993485B2 (en) * | 2001-07-16 | 2006-01-31 | Maytag Corporation | Method and system for refrigerator with integrated presentation mode |
EP1678008B1 (en) * | 2003-10-21 | 2009-03-25 | Johnson Controls Technology Company | System and method for selecting a user speech profile for a device in a vehicle |
US20070180384A1 (en) * | 2005-02-23 | 2007-08-02 | Demetrio Aiello | Method for selecting a list item and information or entertainment system, especially for motor vehicles |
US8015014B2 (en) * | 2006-06-16 | 2011-09-06 | Storz Endoskop Produktions Gmbh | Speech recognition system with user profiles management component |
US20100049619A1 (en) * | 2006-06-28 | 2010-02-25 | Planet Payment, Inc. | Telephone-based commerce system and method |
MX2009000753A (en) * | 2006-07-20 | 2009-03-16 | Lg Electronics Inc | Operation method of interactive refrigerator system. |
US7899673B2 (en) * | 2006-08-09 | 2011-03-01 | Microsoft Corporation | Automatic pruning of grammars in a multi-application speech recognition interface |
KR101304117B1 (en) * | 2007-06-01 | 2013-09-05 | 엘지전자 주식회사 | Information input method for refrigerator |
KR100885584B1 (en) * | 2007-08-23 | 2009-02-24 | 엘지전자 주식회사 | Refrigerator and its control method |
CN101802532B (en) * | 2007-09-13 | 2012-08-08 | Lg电子株式会社 | Refrigerator |
EP2221806B1 (en) * | 2009-02-19 | 2013-07-17 | Nuance Communications, Inc. | Speech recognition of a list entry |
US9111440B2 (en) * | 2011-01-06 | 2015-08-18 | Lg Electronics Inc. | Refrigerator and remote controller |
KR20120118376A (en) * | 2011-04-18 | 2012-10-26 | 엘지전자 주식회사 | Controlling method of a refrigerator |
KR101783615B1 (en) * | 2011-08-12 | 2017-10-10 | 엘지전자 주식회사 | A system for purchasing and managing an item using a terminal and a refrigerator using the same |
US20130191243A1 (en) * | 2012-01-06 | 2013-07-25 | Lg Electronics Inc. | Terminal and a control method thereof |
WO2013105682A1 (en) * | 2012-01-13 | 2013-07-18 | 엘지전자 주식회사 | Method for controlling operation of refrigerator by using speech recognition, and refrigerator employing same |
KR102113039B1 (en) * | 2012-09-20 | 2020-06-02 | 엘지전자 주식회사 | Home appliance, method for shopping goods using the same |
CN103712410B (en) * | 2012-09-28 | 2017-05-17 | Lg电子株式会社 | Electric product |
CN203249464U (en) | 2013-03-25 | 2013-10-23 | 吴天祥 | Refrigerator with online shopping function |
KR20140139736A (en) * | 2013-05-28 | 2014-12-08 | 삼성전자주식회사 | Refrigerator and method for controlling the same |
JP6100101B2 (en) * | 2013-06-04 | 2017-03-22 | アルパイン株式会社 | Candidate selection apparatus and candidate selection method using speech recognition |
MY175230A (en) * | 2013-08-29 | 2020-06-16 | Panasonic Ip Corp America | Device control method, display control method, and purchase payment method |
US9406297B2 (en) * | 2013-10-30 | 2016-08-02 | Haier Us Appliance Solutions, Inc. | Appliances for providing user-specific response to voice commands |
US9412363B2 (en) * | 2014-03-03 | 2016-08-09 | Microsoft Technology Licensing, Llc | Model based approach for on-screen item selection and disambiguation |
CN105654270A (en) * | 2014-11-18 | 2016-06-08 | 博西华家用电器有限公司 | Refrigerator, terminal, and management system and management method for food materials in refrigerator |
US9449208B2 (en) * | 2014-12-03 | 2016-09-20 | Paypal, Inc. | Compartmentalized smart refrigerator with automated item management |
US9552816B2 (en) * | 2014-12-19 | 2017-01-24 | Amazon Technologies, Inc. | Application focus in speech-based systems |
US10476872B2 (en) * | 2015-02-20 | 2019-11-12 | Sri International | Joint speaker authentication and key phrase identification |
CN104990359A (en) | 2015-07-03 | 2015-10-21 | 九阳股份有限公司 | Intelligent refrigerator |
CN204987636U (en) | 2015-07-28 | 2016-01-20 | 谈利民 | Intelligence shopping refrigerator |
US10474987B2 (en) * | 2015-08-05 | 2019-11-12 | Whirlpool Corporation | Object recognition system for an appliance and method for managing household inventory of consumables |
US11761701B2 (en) * | 2016-01-25 | 2023-09-19 | Sun Kyong Lee | Refrigerator inventory device |
TWM527082U (en) * | 2016-04-11 | 2016-08-11 | Probe Jet Tech | Intelligent refrigerator |
KR20180048089A (en) * | 2016-11-02 | 2018-05-10 | 엘지전자 주식회사 | Refrigerator |
-
2017
- 2017-12-26 KR KR1020170179174A patent/KR102412202B1/en active Active
- 2017-12-27 CN CN201780082150.3A patent/CN110168298B/en active Active
- 2017-12-27 EP EP17890733.3A patent/EP3546869A4/en active Pending
- 2017-12-27 US US16/476,008 patent/US11521606B2/en active Active
US11893308B2 (en) | 2017-09-29 | 2024-02-06 | Sonos, Inc. | Media playback system with concurrent voice assistance |
US12212945B2 (en) | 2017-12-10 | 2025-01-28 | Sonos, Inc. | Network microphone devices with automatic do not disturb actuation capabilities |
US12154569B2 (en) | 2017-12-11 | 2024-11-26 | Sonos, Inc. | Home graph |
US11343614B2 (en) | 2018-01-31 | 2022-05-24 | Sonos, Inc. | Device designation of playback and network microphone device arrangements |
US11689858B2 (en) | 2018-01-31 | 2023-06-27 | Sonos, Inc. | Device designation of playback and network microphone device arrangements |
US20190267000A1 (en) * | 2018-02-23 | 2019-08-29 | Accenture Global Solutions Limited | Adaptive interactive voice response system |
US11404057B2 (en) * | 2018-02-23 | 2022-08-02 | Accenture Global Solutions Limited | Adaptive interactive voice response system |
US20230290346A1 (en) * | 2018-03-23 | 2023-09-14 | Amazon Technologies, Inc. | Content output management based on speech quality |
US11797263B2 (en) | 2018-05-10 | 2023-10-24 | Sonos, Inc. | Systems and methods for voice-assisted media content selection |
US11792590B2 (en) | 2018-05-25 | 2023-10-17 | Sonos, Inc. | Determining and adapting to changes in microphone performance of playback devices |
US11696074B2 (en) | 2018-06-28 | 2023-07-04 | Sonos, Inc. | Systems and methods for associating playback devices with voice assistant services |
US11482978B2 (en) | 2018-08-28 | 2022-10-25 | Sonos, Inc. | Audio notifications |
US11563842B2 (en) | 2018-08-28 | 2023-01-24 | Sonos, Inc. | Do not disturb feature for audio notifications |
US11432030B2 (en) | 2018-09-14 | 2022-08-30 | Sonos, Inc. | Networked devices, systems, and methods for associating playback devices based on sound codes |
US11778259B2 (en) | 2018-09-14 | 2023-10-03 | Sonos, Inc. | Networked devices, systems and methods for associating playback devices based on sound codes |
US11315553B2 (en) * | 2018-09-20 | 2022-04-26 | Samsung Electronics Co., Ltd. | Electronic device and method for providing or obtaining data for training thereof |
US12230291B2 (en) | 2018-09-21 | 2025-02-18 | Sonos, Inc. | Voice detection optimization using sound metadata |
US11790937B2 (en) | 2018-09-21 | 2023-10-17 | Sonos, Inc. | Voice detection optimization using sound metadata |
US12165651B2 (en) | 2018-09-25 | 2024-12-10 | Sonos, Inc. | Voice detection optimization based on selected voice assistant service |
US12165644B2 (en) | 2018-09-28 | 2024-12-10 | Sonos, Inc. | Systems and methods for selective wake word detection |
US11790911B2 (en) | 2018-09-28 | 2023-10-17 | Sonos, Inc. | Systems and methods for selective wake word detection using neural network models |
US12062383B2 (en) | 2018-09-29 | 2024-08-13 | Sonos, Inc. | Linear filtering for noise-suppressed speech detection via multiple network microphone devices |
US11899519B2 (en) | 2018-10-23 | 2024-02-13 | Sonos, Inc. | Multiple stage network microphone device with reduced power consumption and processing load |
US11741948B2 (en) | 2018-11-15 | 2023-08-29 | Sonos Vox France Sas | Dilated convolutions and gating for efficient keyword spotting |
US11557294B2 (en) | 2018-12-07 | 2023-01-17 | Sonos, Inc. | Systems and methods of operating media playback systems having multiple voice assistant services |
US12288558B2 (en) | 2018-12-07 | 2025-04-29 | Sonos, Inc. | Systems and methods of operating media playback systems having multiple voice assistant services |
US11538460B2 (en) | 2018-12-13 | 2022-12-27 | Sonos, Inc. | Networked microphone devices, systems, and methods of localized arbitration |
US11211071B2 (en) * | 2018-12-14 | 2021-12-28 | American International Group, Inc. | System, method, and computer program product for home appliance care |
US11540047B2 (en) | 2018-12-20 | 2022-12-27 | Sonos, Inc. | Optimization of network microphone devices using noise classification |
US11646023B2 (en) | 2019-02-08 | 2023-05-09 | Sonos, Inc. | Devices, systems, and methods for distributed voice processing |
US11315556B2 (en) | 2019-02-08 | 2022-04-26 | Sonos, Inc. | Devices, systems, and methods for distributed voice processing by transmitting sound data associated with a wake word to an appropriate device for identification |
US11822601B2 (en) | 2019-03-15 | 2023-11-21 | Spotify Ab | Ensemble-based data comparison |
US11217250B2 (en) * | 2019-03-27 | 2022-01-04 | Samsung Electronics Co., Ltd. | Multi-modal interaction with intelligent assistants in voice command devices |
US11798553B2 (en) | 2019-05-03 | 2023-10-24 | Sonos, Inc. | Voice assistant persistence across multiple network microphone devices |
US11200888B2 (en) * | 2019-05-09 | 2021-12-14 | Lg Electronics Inc. | Artificial intelligence device for providing speech recognition function and method of operating artificial intelligence device |
US11200894B2 (en) | 2019-06-12 | 2021-12-14 | Sonos, Inc. | Network microphone device with command keyword eventing |
US11361756B2 (en) | 2019-06-12 | 2022-06-14 | Sonos, Inc. | Conditional wake word eventing based on environment |
US11501773B2 (en) | 2019-06-12 | 2022-11-15 | Sonos, Inc. | Network microphone device with command keyword conditioning |
US11854547B2 (en) | 2019-06-12 | 2023-12-26 | Sonos, Inc. | Network microphone device with command keyword eventing |
US11335089B2 (en) * | 2019-07-01 | 2022-05-17 | Zhejiang Normal University | Food detection and identification method based on deep learning |
US20220319511A1 (en) * | 2019-07-22 | 2022-10-06 | Lg Electronics Inc. | Display device and operation method for same |
US11551669B2 (en) | 2019-07-31 | 2023-01-10 | Sonos, Inc. | Locally distributed keyword detection |
US12093608B2 (en) | 2019-07-31 | 2024-09-17 | Sonos, Inc. | Noise classification for event detection |
US11714600B2 (en) | 2019-07-31 | 2023-08-01 | Sonos, Inc. | Noise classification for event detection |
US11710487B2 (en) | 2019-07-31 | 2023-07-25 | Sonos, Inc. | Locally distributed keyword detection |
US12211490B2 (en) | 2019-07-31 | 2025-01-28 | Sonos, Inc. | Locally distributed keyword detection |
US11501250B2 (en) * | 2019-08-09 | 2022-11-15 | Lg Electronics Inc. | Refrigerator for providing information on item using artificial intelligence and method of operating the same |
US10965489B2 (en) * | 2019-08-30 | 2021-03-30 | Lg Electronics Inc. | Artificial intelligence refrigerator and method for controlling the same |
US11551678B2 (en) | 2019-08-30 | 2023-01-10 | Spotify Ab | Systems and methods for generating a cleaned version of ambient sound |
US20210081883A1 (en) * | 2019-09-18 | 2021-03-18 | Divert, Inc. | Systems and methods for determining compliance with an entity's standard operating procedures |
US11862161B2 (en) | 2019-10-22 | 2024-01-02 | Sonos, Inc. | VAS toggle based on device orientation |
US11869503B2 (en) | 2019-12-20 | 2024-01-09 | Sonos, Inc. | Offline voice control |
US11562740B2 (en) | 2020-01-07 | 2023-01-24 | Sonos, Inc. | Voice verification for media playback |
US12118273B2 (en) | 2020-01-31 | 2024-10-15 | Sonos, Inc. | Local voice data processing |
US11961519B2 (en) | 2020-02-07 | 2024-04-16 | Sonos, Inc. | Localized wakeword verification |
US11308958B2 (en) | 2020-02-07 | 2022-04-19 | Sonos, Inc. | Localized wakeword verification |
US11308959B2 (en) | 2020-02-11 | 2022-04-19 | Spotify Ab | Dynamic adjustment of wake word acceptance tolerance thresholds in voice-controlled devices |
US11810564B2 (en) | 2020-02-11 | 2023-11-07 | Spotify Ab | Dynamic adjustment of wake word acceptance tolerance thresholds in voice-controlled devices |
US11328722B2 (en) * | 2020-02-11 | 2022-05-10 | Spotify Ab | Systems and methods for generating a singular voice audio stream |
US11482224B2 (en) | 2020-05-20 | 2022-10-25 | Sonos, Inc. | Command keywords with input detection windowing |
US11727919B2 (en) | 2020-05-20 | 2023-08-15 | Sonos, Inc. | Memory allocation for keyword spotting engines |
US11308962B2 (en) * | 2020-05-20 | 2022-04-19 | Sonos, Inc. | Input detection windowing |
US12119000B2 (en) * | 2020-05-20 | 2024-10-15 | Sonos, Inc. | Input detection windowing |
US20220319513A1 (en) * | 2020-05-20 | 2022-10-06 | Sonos, Inc. | Input detection windowing |
US20230352024A1 (en) * | 2020-05-20 | 2023-11-02 | Sonos, Inc. | Input detection windowing |
US11694689B2 (en) * | 2020-05-20 | 2023-07-04 | Sonos, Inc. | Input detection windowing |
US11698771B2 (en) | 2020-08-25 | 2023-07-11 | Sonos, Inc. | Vocal guidance engines for playback devices |
US12159085B2 (en) | 2020-08-25 | 2024-12-03 | Sonos, Inc. | Vocal guidance engines for playback devices |
US12283269B2 (en) | 2020-10-16 | 2025-04-22 | Sonos, Inc. | Intent inference in audiovisual communication sessions |
EP4206895A4 (en) * | 2020-10-30 | 2024-02-14 | Qingdao Haier Refrigerator Co., Ltd | Method for printing food material information, system for printing food material information and readable storage medium |
US12188715B2 (en) * | 2020-11-04 | 2025-01-07 | Hisense Visual Technology Co., Ltd. | Refrigerator and method for editing food information |
US11984123B2 (en) | 2020-11-12 | 2024-05-14 | Sonos, Inc. | Network device interaction by range |
CN115077160A (en) * | 2021-03-10 | 2022-09-20 | Panasonic R&D (Suzhou) Co., Ltd. | Refrigerator and intelligent refrigerator system |
CN115077158A (en) * | 2021-03-10 | 2022-09-20 | Panasonic R&D (Suzhou) Co., Ltd. | Refrigerator and intelligent refrigerator system |
CN115077155A (en) * | 2021-03-10 | 2022-09-20 | Panasonic R&D (Suzhou) Co., Ltd. | Refrigerator and intelligent refrigerator system |
CN115077157A (en) * | 2021-03-10 | 2022-09-20 | Panasonic R&D (Suzhou) Co., Ltd. | Refrigerator and intelligent refrigerator system |
CN115077161A (en) * | 2021-03-10 | 2022-09-20 | Panasonic R&D (Suzhou) Co., Ltd. | Refrigerator and intelligent refrigerator system |
CN115077162A (en) * | 2021-03-10 | 2022-09-20 | Panasonic R&D (Suzhou) Co., Ltd. | Refrigerator and intelligent refrigerator system |
US12105775B2 (en) * | 2021-09-24 | 2024-10-01 | Haier Us Appliance Solutions, Inc. | Inventory management system in a refrigerator appliance |
US20230097905A1 (en) * | 2021-09-24 | 2023-03-30 | Haier Us Appliance Solutions, Inc. | Inventory management system in a refrigerator appliance |
US12314633B2 (en) | 2024-03-05 | 2025-05-27 | Sonos, Inc. | Playback device supporting concurrent voice assistants |
Also Published As
Publication number | Publication date |
---|---|
EP3546869A4 (en) | 2019-11-27 |
KR102412202B1 (en) | 2022-06-27 |
CN110168298B (en) | 2023-01-03 |
CN110168298A (en) | 2019-08-23 |
EP3546869A1 (en) | 2019-10-02 |
KR20180080112A (en) | 2018-07-11 |
US11521606B2 (en) | 2022-12-06 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11521606B2 (en) | Refrigerator and information display method thereof | |
US11671386B2 (en) | Electronic device and method for changing chatbot | |
KR102501714B1 (en) | Device and method for providing response message to user’s voice input | |
US11687319B2 (en) | Speech recognition method and apparatus with activation word based on operating environment of the apparatus | |
US11367434B2 (en) | Electronic device, method for determining utterance intention of user thereof, and non-transitory computer-readable recording medium | |
US11721333B2 (en) | Electronic apparatus and control method thereof | |
US12118274B2 (en) | Electronic device and control method thereof | |
US12298879B2 (en) | Electronic device and method for controlling same | |
US11599928B2 (en) | Refrigerator and method for managing products in refrigerator | |
US20200133211A1 (en) | Electronic device and method for controlling electronic device thereof | |
US20230036080A1 (en) | Device and method for providing recommended words for character input | |
US20190325224A1 (en) | Electronic device and method for controlling the electronic device thereof | |
US20200244791A1 (en) | Electronic device and control method thereof | |
US20230290343A1 (en) | Electronic device and control method therefor | |
KR20200044175A (en) | Electronic apparatus and assistant service providing method thereof | |
US11468270B2 (en) | Electronic device and feedback information acquisition method therefor | |
US11347805B2 (en) | Electronic apparatus, method for controlling the same, and non-transitory computer readable recording medium | |
US20210350797A1 (en) | System and method for providing voice assistance service |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
FEPP | Fee payment procedure |
Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
|
AS | Assignment |
Owner name: SAMSUNG ELECTRONICS CO., LTD., KOREA, REPUBLIC OF Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:CHUN, EUN JIN;DO, YOUNG SOO;LEE, HYOUNG JIN;AND OTHERS;REEL/FRAME:053296/0507 Effective date: 20200723 |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: NON FINAL ACTION MAILED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: FINAL REJECTION MAILED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: RESPONSE AFTER FINAL ACTION FORWARDED TO EXAMINER |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT RECEIVED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED |
|
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |